Work on unit vector notation for matrices?

Summary
Recent discussions on representing matrices in unit vector notation highlight the potential for expressing matrices as linear combinations of unit vectors, similar to vector representations. Key points include the relationship between matrix properties such as determinants and norms, particularly in the context of orthogonal matrices that preserve vector lengths. The conversation also touches on tensor theory and dyadic notation as useful frameworks for understanding these representations. Additionally, one participant shares their experience of having a paper accepted on this topic, indicating ongoing research interest. Overall, the dialogue emphasizes the mathematical intricacies and applications of matrix representation in unit vector notation.
dimension10
I would like to inquire whether there has been any recent work on representing matrices in unit vector notation?

Thanks in advance!
 
dimension10 said:
I would like to inquire whether there has been any recent work on representing matrices in unit vector notation?

Thanks in advance!

Hey dimension10.

I'm not exactly sure what you are getting at, but I'll throw a few comments in.

You can look at how much the matrix 'stretches' things in terms of the hyper-volume, which is given by the determinant.

Matrices also have norms just like vectors, so you may want to look into this as well.

There are also special groups of matrices called the orthogonal and special orthogonal groups. If you make sure that the transpose equals the inverse, and the determinant is 1, then you have what is called a rotation group. These matrices preserve the distance of a point to the origin under all valid rotation transformations, so applying the operator to a vector doesn't change its length.
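For concreteness, here's a minimal NumPy sketch of those properties for a plain 2D rotation (the angle is arbitrary): the transpose equals the inverse, the determinant is 1, and vector lengths are preserved.

```python
import numpy as np

# A 2D rotation by an arbitrary angle -- an element of the special orthogonal group SO(2).
theta = 0.7
R = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])

# The transpose equals the inverse, and the determinant is 1.
assert np.allclose(R.T @ R, np.eye(2))
assert np.isclose(np.linalg.det(R), 1.0)

# The length of a vector is unchanged by the rotation.
v = np.array([3.0, 4.0])
assert np.isclose(np.linalg.norm(R @ v), np.linalg.norm(v))
```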
 
chiro said:
Hey dimension10.

I'm not exactly sure what you are getting at, but I'll throw a few comments in.

You can look at how much the matrix 'stretches' things in terms of the hyper-volume, which is given by the determinant.

Matrices also have norms just like vectors, so you may want to look into this as well.

There are also special groups of matrices called the orthogonal and special orthogonal groups. If you make sure that the transpose equals the inverse, and the determinant is 1, then you have what is called a rotation group. These matrices preserve the distance of a point to the origin under all valid rotation transformations, so applying the operator to a vector doesn't change its length.

Thanks for the info, but what I was really asking is whether there is any way to represent a matrix as a linear or non-linear combination of unit vectors. For example,

##\left[ \begin{array}{c} a\\ b \end{array} \right] = a\,\mathbf{\hat e}_1 + b\,\mathbf{\hat e}_2##

I was wondering if there is any work on how to represent a matrix in a similar way?
 
dimension10 said:
Thanks for the info, but what I was really asking is whether there is any way to represent a matrix as a linear or non-linear combination of unit vectors. For example,

##\left[ \begin{array}{c} a\\ b \end{array} \right] = a\,\mathbf{\hat e}_1 + b\,\mathbf{\hat e}_2##

I was wondering if there is any work on how to represent a matrix in a similar way?

If the basis vectors are orthogonal, then the matrix is basically an element of a rotation group in any dimension, with determinant 1 and ##R^{-1} = R^T## (R transpose).

Also, before I forget: if they are not orthonormal, then find a transformation from Cartesian co-ordinates to your basis (if it's a curved geometry, use tensor theory), and then go from there.
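For the flat (non-curved) case, here's a rough NumPy sketch of that change of basis, using an arbitrary map ##A## and a made-up non-orthonormal basis stored as the columns of ##P##: the same linear map has components ##P^{-1} A P## in the new basis.

```python
import numpy as np

# A linear map, written in Cartesian coordinates.
A = np.array([[2.0, 1.0],
              [0.0, 3.0]])

# A non-orthonormal basis: columns of P are the basis vectors in Cartesian coordinates.
P = np.array([[1.0, 1.0],
              [0.0, 2.0]])

# Components of the same map in the new basis.
A_new = np.linalg.inv(P) @ A @ P

# Sanity check: applying the map gives the same vector in either description.
x_new = np.array([1.0, -1.0])   # coordinates of some vector in the new basis
x_cart = P @ x_new              # the same vector in Cartesian coordinates
assert np.allclose(P @ (A_new @ x_new), A @ x_cart)
```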

Have you studied tensors?
 
chiro said:
If the basis vectors are orthogonal, then the matrix is basically an element of a rotation group in any dimension, with determinant 1 and ##R^{-1} = R^T## (R transpose).

Also, before I forget: if they are not orthonormal, then find a transformation from Cartesian co-ordinates to your basis (if it's a curved geometry, use tensor theory), and then go from there.

Have you studied tensors?

Thanks, I get what you are saying. I know some basic tensor theory, though not much.
 
A linear operator is a function, not a vector. A particularly simple operator could be represented by ##\underline F(a)=(a\cdot f)\,g## for vectors ##a, f, g##, but in general they're more complicated.
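For a concrete check, here's a small NumPy sketch of that rank-one operator: ##(a\cdot f)\,g## is just the matrix ##g f^T## (an outer product) applied to ##a##; the vectors below are arbitrary.

```python
import numpy as np

# Arbitrary vectors for the illustration.
f = np.array([1.0, 2.0, 3.0])
g = np.array([0.0, 1.0, -1.0])
a = np.array([2.0, -1.0, 0.5])

# The operator F(a) = (a . f) g ...
F_a = np.dot(a, f) * g

# ... is the rank-one matrix g f^T (the outer product of g and f) acting on a.
M = np.outer(g, f)
assert np.allclose(M @ a, F_a)
```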
 
##\left[ \begin{array}{cc} a & b\\ c & d \end{array} \right] = a\,\mathbf{\hat e}_{11} + b\,\mathbf{\hat e}_{12} + c\,\mathbf{\hat e}_{21} + d\,\mathbf{\hat e}_{22}##
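For concreteness, a short NumPy sketch of this decomposition (with arbitrary entries): every matrix is the sum of its entries times the basis matrices ##\mathbf{\hat e}_{ij}##, each with a 1 in position (i, j) and zeros elsewhere.

```python
import numpy as np

# An arbitrary 2 x 2 matrix.
A = np.array([[1.0, 2.0],
              [3.0, 4.0]])

# Basis matrix e_ij: a 1 in position (i, j), zeros elsewhere (0-based indices here).
def e(i, j, n=2):
    E = np.zeros((n, n))
    E[i, j] = 1.0
    return E

# A = a*e_11 + b*e_12 + c*e_21 + d*e_22, with the entries of A as coefficients.
A_rebuilt = sum(A[i, j] * e(i, j) for i in range(2) for j in range(2))
assert np.allclose(A, A_rebuilt)
```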
 
I like Serena said:
##\left[ \begin{array}{cc} a & b\\ c & d \end{array} \right] = a\,\mathbf{\hat e}_{11} + b\,\mathbf{\hat e}_{12} + c\,\mathbf{\hat e}_{21} + d\,\mathbf{\hat e}_{22}##

Thanks.
 
I like Serena said:
##\left[ \begin{array}{cc} a & b\\ c & d \end{array} \right] = a\,\mathbf{\hat e}_{11} + b\,\mathbf{\hat e}_{12} + c\,\mathbf{\hat e}_{21} + d\,\mathbf{\hat e}_{22}##

Artin uses this notation in the first chapter of Algebra, so there you can find discussion and exercises, which I at least found challenging.

I was recently reading this in self-study, and I didn't think to relate it to the idea of tensors and vectors, so I'm glad you brought this up for discussion.
 
  • #10
It might be helpful to read about Dyadic notation.

That Wikipedia article doesn't mention it, but the same idea can be written using Dirac notation for outer products. If
##
\{ |1 \rangle, \ldots, | N \rangle \}
##
is an orthonormal basis for an ##N## dimensional space, then any linear operator on that space can be written

##
\quad a_{11} | 1 \rangle \langle 1 | + a_{12} | 1 \rangle \langle 2 | + \cdots + a_{1N} | 1 \rangle \langle N |
##
##
+ a_{21} | 2 \rangle \langle 1 | + a_{22} | 2 \rangle \langle 2 | + \cdots + a_{2N} | 2 \rangle \langle N |
##
##
+ \cdots
##
##
+ a_{N1} | N \rangle \langle 1 | + a_{N2} | N \rangle \langle 2 | + \cdots + a_{NN} | N \rangle \langle N |
##

In the given basis, this is the operator represented by the matrix
##
A =
\left[\begin{array}{cccc}
a_{11} & a_{12} & \cdots & a_{1N} \\
a_{21} & a_{22} & \cdots & a_{2N} \\
\vdots & \vdots & \ddots & \vdots \\
a_{N1} & a_{N2} & \cdots & a_{NN} \\
\end{array}\right]
##
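As a quick numerical check of the same idea (using the standard basis columns to play the role of the kets, and an arbitrary matrix), each term ##a_{ij}\,|i\rangle\langle j|## is an outer product and the terms sum back to ##A##:

```python
import numpy as np

N = 3
A = np.arange(1.0, N * N + 1).reshape(N, N)     # an arbitrary N x N matrix

# Standard basis columns standing in for the orthonormal kets |1>, ..., |N>.
kets = [np.eye(N)[:, i] for i in range(N)]

# Sum of a_ij |i><j|, where |i><j| is the outer product of two basis vectors.
A_rebuilt = sum(A[i, j] * np.outer(kets[i], kets[j])
                for i in range(N) for j in range(N))
assert np.allclose(A, A_rebuilt)
```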
 
  • #11
I like Serena said:
##\left[ \begin{array}{cc} a & b\\ c & d \end{array} \right] = a\,\mathbf{\hat e}_{11} + b\,\mathbf{\hat e}_{12} + c\,\mathbf{\hat e}_{21} + d\,\mathbf{\hat e}_{22}##

But isn't ##\mathbf{\hat{e}}_{nn}=\mathbf{\hat{e}}_n\wedge \mathbf{\hat{e}}_n=0##?
 
  • #12
dimension10 said:
But isn't ##\mathbf{\hat{e}}_{nn}=\mathbf{\hat{e}}_n\wedge \mathbf{\hat{e}}_n=0##?

Hmm, I presume you mean the outer product with your wedge.
But the result of an outer product would be a vector, not a matrix.
So no, that is not ##\mathbf{\hat{e}}_{nn}##.

##\mathbf{\hat{e}}_{ij}## is defined as the matrix with a 1 on position (i,j) and 0 everywhere else.
 
  • #13
Why would you want to represent matrices in vector notation anyway?

I like Serena said:
I presume you mean the outer product with your wedge.
I don't know why, but that made me grin :redface:
 
  • #14
dimension10 said:
But isn't ##\mathbf{\hat{e}}_{nn}=\mathbf{\hat{e}}_n\wedge \mathbf{\hat{e}}_n=0##?

As I like Serena says, in this context it's meant more like a dyad than a multivector. I think I've seen that notation used for wedge products of unit vectors too, though. I've also seen it used for the geometric product, in which case ##\mathbf{\hat e}_{nn} = 1##.
 
  • #15
Ok, thanks everyone. It's clear to me now. I was asking because I had submitted a paper to a journal about representing matrices in unit vector notation, and it was accepted (yay! my first publication), but they had given me a comment to add more references, which is why I was asking here. In the end, I ignored the comment, but at least I learned something!
 
