Row of a matrix is a vector along the same degree of freedom

AI Thread Summary
Matrices can be viewed as arrays of vectors, where rows are transposed vectors, leading to the interpretation that the scalar product of a column vector and a row of a matrix involves the transpose of the row vector. This raises questions about the geometric interpretation of matrices, which some view as operators that convert one vector into another, akin to a rule for transforming functions. The discussion also touches on representing matrices geometrically through discrete plots or transformations in function space. Additionally, the relationship between ordinary and matrix representations is highlighted, particularly in the context of angular momentum and kinetic energy equations. Overall, the conversation explores the complexities of matrix representation and their geometric implications.
Gear300
A particular introduction to matrices involved viewing them as an array/list of vectors (column vectors) in Rn. The problem I see in this is that it is sort of like saying that a row of a matrix is a vector along the same degree of freedom (elements of the same row are elements of different vectors all in the same dimension). So from this, technically, the scalar product of a column vector v and row1 of a matrix A should only exist as a product between the elements of row1 of the matrix A and row1 of the column vector v...which doesn't seem right (since matrix-vector multiplication Av is defined as a column vector of dot products between the vector v and rows of A). How would one geometrically interpret a matrix?
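A minimal NumPy sketch of the two views (the matrix and vector here are purely illustrative): each entry of Av is the dot product of v with a row of A, and the same result is a linear combination of the columns of A weighted by the entries of v:

```python
import numpy as np

# Illustrative 3x3 matrix and vector
A = np.array([[1.0, 2.0, 0.0],
              [0.0, 1.0, 3.0],
              [4.0, 0.0, 1.0]])
v = np.array([1.0, -1.0, 2.0])

# View 1: each entry of Av is the dot product of v with a row of A
row_view = np.array([A[i, :] @ v for i in range(3)])

# View 2: Av is a linear combination of the columns of A,
# weighted by the entries of v
col_view = sum(v[j] * A[:, j] for j in range(3))

print(np.allclose(row_view, col_view))  # True
print(np.allclose(row_view, A @ v))     # True
```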
 
Gear300 said:
… So from this, technically, the scalar product of a column vector v and row1 of a matrix A should only exist as a product between the elements of row1 of the matrix A and row1 of the column vector v...

which doesn't seem right (since matrix-vector multiplication Av is defined as a column vector of dot products between the vector v and rows of A).

Hi Gear300! :smile:

Technically, row vectors are transpose vectors,

so the first row of A is not the vector a1 (say), but the transpose vector a1ᵀ.

Then your scalar product in matrix form is (a1ᵀ)v,

but in ordinary form it is a1·v :wink:
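A small sketch of that point (the numbers are arbitrary, chosen only as an example): treating the row as the transpose of a column vector, the 1×n by n×1 matrix product gives the same scalar as the ordinary dot product:

```python
import numpy as np

a1 = np.array([[1.0], [2.0], [3.0]])   # a1 as a column vector (3x1)
v  = np.array([[4.0], [5.0], [6.0]])   # v as a column vector (3x1)

matrix_form   = (a1.T @ v).item()                   # (a1^T) v, a 1x1 matrix
ordinary_form = float(a1.flatten() @ v.flatten())   # a1 . v

print(matrix_form, ordinary_form)      # both 32.0
```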
How would one geometrically interpret a matrix?

Dunno :redface:, except that I always think of a matrix as being a rule that converts one vector into another vector. :smile:
 


I understand matrices as operators: just as you may have basis vectors (x, y, z, all orthonormal), you can have basis functions (such as sin(nx) for n = 1, 2, 3, ...) into which any periodic function can be decomposed using a Fourier series. Then the function can be represented completely by a column vector containing the amplitude of each frequency, and the matrix multiplication will output a new function. So in that sense you might represent a matrix geometrically by a series of "before and after" functions!

Then again this is just fun speculation... the most fundamental way of expressing a matrix geometrically is probably by a discrete 2-D plot, f(m,n) = the (m,n)th element of the matrix.
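A minimal sketch of that operator idea (the second-derivative operator, the sine basis, and the truncation to four terms are all just illustrative choices): on the basis sin(nx), d²/dx² sends sin(nx) to -n² sin(nx), so as a matrix on the coefficient vector it is simply diag(-n²):

```python
import numpy as np

# Truncated sine basis sin(nx), n = 1..4
n = np.arange(1, 5)

# Coefficients of f(x) = 2 sin(x) + 0.5 sin(3x) in that basis
f_coeffs = np.array([2.0, 0.0, 0.5, 0.0])

# d^2/dx^2 acts on the coefficient vector as the diagonal matrix diag(-n^2)
D2 = np.diag(-n.astype(float) ** 2)

g_coeffs = D2 @ f_coeffs
print(g_coeffs)   # coefficients of f''(x) = -2 sin(x) - 4.5 sin(3x)
```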
 


I see...good stuff so far...Thanks for all the replies.
 
Hi Gear300! :smile:

I've found a page, http://farside.ph.utexas.edu/teaching/336k/lectures/node78.html#mom , which shows the relation between the ordinary and the matrix representations of the same equation, and where the transpose fits in …

if ω is the (instantaneous) angular velocity, then the rotational kinetic energy is both:

1/2 ω·L
and (without the "dot")
1/2 ωᵀL = 1/2 ωᵀÎω

where L is the (instantaneous) angular momentum, and Î is the moment of inertia tensor, both measured relative to the centre of mass.
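A quick numerical check of the two forms (the inertia tensor and angular velocity below are made up for illustration), using L = Îω:

```python
import numpy as np

# Illustrative symmetric moment-of-inertia tensor and angular velocity
I = np.array([[2.0, 0.1, 0.0],
              [0.1, 3.0, 0.2],
              [0.0, 0.2, 4.0]])
omega = np.array([1.0, 0.5, -2.0])

L = I @ omega                        # angular momentum, L = I w

T_dot    = 0.5 * np.dot(omega, L)    # "ordinary" form: 1/2 w . L
T_matrix = 0.5 * omega @ I @ omega   # matrix form: 1/2 w^T I w

print(np.isclose(T_dot, T_matrix))   # True
```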
 


Thanks a lot for the link tiny-tim.

So, it's sort of like saying that for an m x p matrix, with n as a dimension index, you could label the rows (going down) from n = 1 to n = m and also label the columns (going across) from n = 1 to n = p, right?
 