Multiplication of Vectors with a Vector as an Element

Summary: The discussion focuses on multiplying a vector that contains another vector as an element, and the implications for matrix multiplication. It asks whether the operation c^T c is defined and how to interpret the product a*a under the rules of matrix multiplication. The replies clarify that taking the dot product of two vectors amounts to transposing one of them and performing matrix multiplication, which is valid when the dimensions align, and that for the dot product to be well defined the inner vector must be a column vector. Overall, the thread explores the nuances of vector notation and the conditions for valid vector operations.
Fribbles
Hi,

Suppose I have a vector c = [a, 1]^T, where the element a is itself a vector, and I multiply c by its transpose:

c^T c

Is this defined? How do I calculate the product a*a? The rules of matrix multiplication would say that a*a is undefined, because it is an n×1 matrix multiplied by an n×1 matrix. Or is the convention that it is the dot product?

Thank you in advance for your help!
 
In order to write vectors like that, you need to think of the original vector v as a column matrix, say v = \begin{bmatrix} x \\ y \end{bmatrix}, so that its transpose is a row matrix: v* = \begin{bmatrix} x & y \end{bmatrix}. Then the product v*u is the matrix product \begin{bmatrix} x & y \end{bmatrix}\begin{bmatrix} a \\ b \end{bmatrix} = ax + by.

Of course, that is only notation. If you want to write the vector v as a row, fine; then its transpose is a column and you have to write the product in the other order. Or you can go with the more abstract concept: given a vector in a vector space V, there exists an isomorphism from V to its dual V*, the space of linear functions from V to the underlying field. In that case "v*u" is the linear function corresponding to v applied to the vector u.
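For concreteness, here is a minimal numerical sketch of that identity; NumPy and the particular values of v and u are illustrative choices, not part of the thread:

import numpy as np

# Column vectors represented as n-by-1 matrices.
v = np.array([[1.0], [2.0]])   # v = [x, y]^T
u = np.array([[3.0], [4.0]])   # u = [a, b]^T

# v*u: transpose v into a 1-by-2 row matrix, then matrix-multiply.
product = v.T @ u              # 1-by-1 matrix [[x*a + y*b]] = [[11.0]]

# The same number via the built-in dot product on flat vectors.
dot = np.dot(v.ravel(), u.ravel())   # 11.0

assert product[0, 0] == dot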
 
Are you sure this is a well-defined vector, though?
The matrix notation, \vec{u} = \begin{bmatrix}x_1 \\ x_2 \\ \vdots \\ x_n\end{bmatrix} is defined such that x_1, \dots, x_n are the components of your vector in a given basis of your vector space, so they must be scalars.
That's why I don't think having a vector there makes much sense.
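One way to reconcile this with the original post (an assumed reading, since the thread never spells it out): if a is itself a column vector with scalar components a_1, \dots, a_n, then c = [a, 1]^T can be read as a block vector, i.e. as the ordinary (n+1)-component column vector

c = \begin{bmatrix} a \\ 1 \end{bmatrix} = \begin{bmatrix} a_1 \\ \vdots \\ a_n \\ 1 \end{bmatrix}, \qquad c^T c = a^T a + 1 = a_1^2 + \cdots + a_n^2 + 1,

whose entries are all scalars, and under that reading c^T c is well defined.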
 
Thank you for the replies... so then does a have to be a column vector?
 
When you take the dot product of two vectors, a · b, what you're really doing is taking the transpose of a and then performing matrix multiplication:

a · b ≡ a^T b

If a is n×1 and b is n×1, then a^T is 1×n, so the matrix multiplication a^T b is defined (and gives you back a 1×1 matrix, i.e. a real number).
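Tying this back to the original question: under the block-vector reading above, c^T c reduces to a^T a + 1. A minimal NumPy check of that formula (the library and the sample values of a are illustrative choices, not from the thread):

import numpy as np

a = np.array([2.0, 3.0])          # the inner vector a, here with n = 2
c = np.concatenate([a, [1.0]])    # c = [a, 1]^T flattened to an (n+1)-vector

# c^T c computed directly as a dot product...
lhs = c @ c                       # 2^2 + 3^2 + 1^2 = 14.0

# ...agrees with the block formula a^T a + 1.
rhs = a @ a + 1.0                 # 14.0

assert lhs == rhs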
 