Row of a matrix is a vector along the same degree of freedom

Gear300
A particular introduction to matrices involved viewing them as an array/list of column vectors in R^n. The problem I see with this is that it is essentially saying that a row of a matrix is a vector along the same degree of freedom (the elements of a given row belong to different vectors but all lie along the same dimension). So from this, technically, the scalar product of a column vector v and row 1 of a matrix A should only exist as a product between the elements of row 1 of A and the first entry (row 1) of the column vector v... which doesn't seem right, since the matrix-vector product Av is defined as the column vector of dot products between v and the rows of A. How would one geometrically interpret a matrix?
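For concreteness, here is a minimal NumPy sketch (the 2x3 matrix and the vector are just made-up numbers) of the definition being questioned: each entry of Av is the dot product of one row of A with v.

```python
import numpy as np

# Made-up example data: a 2x3 matrix A and a vector v in R^3.
A = np.array([[1.0, 2.0, 3.0],
              [4.0, 5.0, 6.0]])
v = np.array([7.0, 8.0, 9.0])

# Entry i of Av is the dot product of row i of A with v ...
Av_rowwise = np.array([np.dot(row, v) for row in A])

# ... which agrees with the built-in matrix-vector product A @ v.
assert np.allclose(Av_rowwise, A @ v)
print(Av_rowwise)   # [ 50. 122.]
```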
 
Gear300 said:
… So from this, technically, the scalar product of a column vector v and row 1 of a matrix A should only exist as a product between the elements of row 1 of A and the first entry (row 1) of the column vector v...

which doesn't seem right, since the matrix-vector product Av is defined as the column vector of dot products between v and the rows of A.

Hi Gear300! :smile:

Technically, row vectors are transpose vectors,

so the first row of A is not the vector a1 (say), but the transpose vector a1T.

Then your scalar product in matrix form is (a1T)v,

but in ordinary form it is a1.v :wink:
How would one geometrically interpret a matrix?

Dunno :redface:, except that I always think of a matrix as being a rule that converts one vector into another vector. :smile:
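For concreteness, a small NumPy sketch of the two forms above (made-up numbers): the matrix form (a1T)v is a 1x1 matrix whose single entry is the ordinary dot product a1.v, and the matrix A itself acts as a rule sending v to Av.

```python
import numpy as np

# Made-up data: A's first row, read as the transpose of a column vector a1.
A = np.array([[1.0, 2.0, 3.0],
              [4.0, 5.0, 6.0]])
v = np.array([[7.0], [8.0], [9.0]])              # column vector (3x1)
a1 = A[0, :].reshape(3, 1)                       # a1 as a column vector; row 1 of A is a1^T

matrix_form = a1.T @ v                           # (a1^T) v  -- a 1x1 matrix
ordinary_form = np.dot(a1.ravel(), v.ravel())    # a1 . v    -- an ordinary scalar
assert np.isclose(matrix_form[0, 0], ordinary_form)

# A as a rule that converts one vector into another: v in R^3 -> Av in R^2.
w = A @ v
print(w.ravel())    # [ 50. 122.]
```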
 


I understand matrices as operators. Just as you may have basis vectors (x, y, z, all orthonormal), you can have basis functions (such as sin(nx) for n = 1, 2, 3, ...) into which any periodic function can be decomposed using a Fourier series. The function can then be represented completely by a column vector containing the amplitude of each frequency, and multiplying that vector by a matrix will output a new function. So in that sense you might represent a matrix geometrically by a series of "before and after" functions!

Then again this is just fun speculation... the most fundamental way of expressing a matrix geometrically is probably by a discrete 2-D plot, f(m,n) = the (m,n)th element of the matrix.
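A minimal sketch of the coefficient-vector idea above, with made-up numbers: a periodic function stored as a column vector of sine amplitudes, and a hypothetical operator that is diagonal in that basis and damps the n-th harmonic by 1/n^2, giving a "before and after" pair of functions.

```python
import numpy as np

# Made-up amplitudes: f(x) = sum_n c[n-1] * sin(n x) for n = 1..4.
c = np.array([1.0, 0.5, 0.25, 0.125])
n = np.arange(1, len(c) + 1)

# A hypothetical "smoothing" operator, diagonal in the sin(nx) basis:
# it damps the amplitude of the n-th harmonic by a factor 1/n^2.
M = np.diag(1.0 / n**2)
c_new = M @ c                      # coefficient vector of the output function

# Reconstruct the "before" and "after" functions on a grid of x values.
x = np.linspace(0.0, 2.0 * np.pi, 200)
f_before = np.sin(np.outer(x, n)) @ c
f_after = np.sin(np.outer(x, n)) @ c_new
print(c_new)    # [1.         0.125      0.02777778 0.0078125 ]
```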
 


I see...good stuff so far...Thanks for all the replies.
 
Hi Gear300! :smile:

I've found http://farside.ph.utexas.edu/teaching/336k/lectures/node78.html#mom on Richard Fitzpatrick's excellent lecture-notes site, which shows the relation between the ordinary and the matrix representation of the same equation, and where the transpose fits in …

if ω is the (instantaneous) angular velocity, then the rotational kinetic energy is both:

1/2 ω.L
and (without the "dot")
1/2 ωᵀL = 1/2 ωᵀÎω

where L is the (instantaneous) angular momentum, and Î is the moment of inertia tensor, both measured relative to the centre of mass.
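A quick numerical check of that identity (the inertia tensor and angular velocity below are made-up numbers): with L = Îω and Î symmetric, the "dot" form and the matrix form give the same kinetic energy.

```python
import numpy as np

# Made-up example: a symmetric moment-of-inertia tensor and an angular velocity.
I_cm = np.array([[2.0, 0.1, 0.0],
                 [0.1, 3.0, 0.2],
                 [0.0, 0.2, 4.0]])
w = np.array([0.5, -1.0, 2.0])

L = I_cm @ w                       # angular momentum  L = I w

T_dot = 0.5 * np.dot(w, L)         # ordinary form:  (1/2) w . L
T_mat = 0.5 * w @ I_cm @ w         # matrix form:    (1/2) w^T I w
assert np.isclose(T_dot, T_mat)
print(T_dot)                       # ≈ 9.3
```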
 


Thanks a lot for the link, tiny-tim.

So it's sort of like saying that for an m x p matrix, with n being a dimension index, you could label the rows (going down) from n = 1 to n = m and also label the columns (going across) from n = 1 to n = p, right?
 