Understanding Matrices and Vectors: Operations, Significance, and Applications

okkvlt
Say I want to find [xa, ya, za]·[xb, yb, zb].

I could use the Pythagorean theorem to find |a| and |b|, then use it again to find the distance between the endpoints of a and b, then use the law of cosines to find the angle between a and b, and finally compute |a||b|cos(angle).
Or I could just compute xa*xb + ya*yb + za*zb.
How are all those operations compressed into such a simple form?

(By the way, the dot product in 2 dimensions is the area of the parallelogram formed by the endpoints of a+b, a, b, and 0, right? Then the dot product of vectors in 3 dimensions is the area of the parallelogram squared to get a 3-dimensional shape, right?) What's the significance of a negative dot product?


And if I want to find the cross product, all I have to do is arrange the components like this:
xa ya za
xb yb zb

Then for each coordinate of the cross product I just remove that column and take the determinant of the remaining 2x2 matrix, reversing the sign for y. Done any other way, I would have to do a lot of complicated work, especially finding that perpendicular unit vector.


Also, how do matrix determinants work in finding the solution to systems of equations?
How do matrices and vectors work?
 
okkvlt said:
Say I want to find [xa, ya, za]·[xb, yb, zb].

I could use the Pythagorean theorem to find |a| and |b|, then use it again to find the distance between the endpoints of a and b, then use the law of cosines to find the angle between a and b, and finally compute |a||b|cos(angle).
Or I could just compute xa*xb + ya*yb + za*zb.
How are all those operations compressed into such a simple form?
You could "prove" it by defining b with respect to a: align your axes so that a lies along the x-axis (a = a_x i), then expand |a||b|cos(angle) in those coordinates and you recover the component formula. Of course this geometric proof doesn't work in R^n for n > 3, where it becomes impossible to visualise the vectors.
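As a quick numeric sanity check (a sketch, not from the thread), here both routes are computed for a pair of example vectors. The law-of-cosines route uses the identity |a|² + |b|² − |a−b|² = 2|a||b|cos(θ), which is exactly where the cross terms xa*xb + ya*yb + za*zb come from when you expand the squares:

```python
a = (1.0, 2.0, 3.0)
b = (4.0, -1.0, 2.0)

# Route 1: the coordinate formula.
dot_coords = sum(ax * bx for ax, bx in zip(a, b))

# Route 2: lengths plus the law of cosines.
# |a - b|^2 = |a|^2 + |b|^2 - 2|a||b|cos(theta), so
# |a||b|cos(theta) = (|a|^2 + |b|^2 - |a - b|^2) / 2
norm_a2 = sum(x * x for x in a)
norm_b2 = sum(x * x for x in b)
dist2 = sum((ax - bx) ** 2 for ax, bx in zip(a, b))
dot_cosine = (norm_a2 + norm_b2 - dist2) / 2

print(dot_coords, dot_cosine)  # both print 8.0
```

Expanding |a−b|² term by term cancels the squared lengths and leaves only the cross terms, which is why the "complicated" route collapses to the simple sum of products.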

(By the way, the dot product in 2 dimensions is the area of the parallelogram formed by the endpoints of a+b, a, b, and 0, right? Then the dot product of vectors in 3 dimensions is the area of the parallelogram squared to get a 3-dimensional shape, right?)
Actually, the area of the parallelogram is given by the determinant of the matrix whose columns are a and b, not by the dot product. In three dimensions, likewise, the determinant of a matrix of 3 (linearly independent) column vectors gives the volume of the parallelepiped they span.

What's the significance of a negative dot product?
It just means that the projection of one vector onto the other points in the opposite direction, i.e. the angle between them is greater than 90 degrees.
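A tiny illustration of that sign (example vectors are my own): the scalar projection a·b/|a| is the signed length of b's "shadow" on a, and it comes out negative exactly when the angle exceeds 90 degrees:

```python
import math

a = (1.0, 0.0)    # points along +x
b = (-2.0, 1.0)   # leans into the -x half-plane

dot = a[0] * b[0] + a[1] * b[1]      # -2.0: angle between a and b > 90 degrees
scalar_proj = dot / math.hypot(*a)   # -2.0: b's shadow on a points opposite to a
print(dot, scalar_proj)
```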

And if I want to find the cross product, all I have to do is arrange the components like this:
xa ya za
xb yb zb

Then for each coordinate of the cross product I just remove that column and take the determinant of the remaining 2x2 matrix, reversing the sign for y. Done any other way, I would have to do a lot of complicated work, especially finding that perpendicular unit vector.
The cross product of \textbf{a} and \textbf{b} is
$$\textbf{a} \times \textbf{b} = \left| \begin{array}{ccc} \mathbf{i} & \mathbf{j} & \mathbf{k} \\ a_x & a_y & a_z \\ b_x & b_y & b_z \end{array} \right|.$$
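Spelled out, the cofactor expansion of that symbolic determinant along the top row gives the component formula, sign flip on the j term included. A minimal sketch (my own example vectors):

```python
def cross(a, b):
    # Cofactor expansion along the top row of
    # | i  j  k  |
    # | ax ay az |
    # | bx by bz |
    return (a[1] * b[2] - a[2] * b[1],
            a[2] * b[0] - a[0] * b[2],   # note: this IS the sign flip for the j (y) term
            a[0] * b[1] - a[1] * b[0])

a, b = (1.0, 2.0, 3.0), (4.0, 5.0, 6.0)
c = cross(a, b)
print(c)  # (-3.0, 6.0, -3.0)

# The result is automatically perpendicular to both inputs:
assert sum(ci * ai for ci, ai in zip(c, a)) == 0.0
assert sum(ci * bi for ci, bi in zip(c, b)) == 0.0
```

That perpendicularity check is exactly the "complicated thing" the determinant trick spares you from doing by hand.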

Also, how do matrix determinants work in finding the solution to systems of equations? How do matrices and vectors work?
You can pick up any introductory linear algebra textbook for a good introduction; I used Elementary Linear Algebra by Anton (9th edition).
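On the determinants-and-systems question specifically, the standard connection (not spelled out in the reply above, so take this as a supplementary sketch) is Cramer's rule: for a square system Ax = b with det(A) ≠ 0, each unknown is a ratio of determinants. A 2×2 version:

```python
def det2(m):
    return m[0][0] * m[1][1] - m[0][1] * m[1][0]

def cramer2(A, b):
    # Cramer's rule: x_i = det(A with column i replaced by b) / det(A).
    d = det2(A)
    if d == 0:
        raise ValueError("det(A) = 0: no unique solution")
    x = det2([[b[0], A[0][1]],
              [b[1], A[1][1]]]) / d
    y = det2([[A[0][0], b[0]],
              [A[1][0], b[1]]]) / d
    return x, y

# 2x + y = 5 and x - y = 1  have the solution x = 2, y = 1:
print(cramer2([[2.0, 1.0], [1.0, -1.0]], [5.0, 1.0]))  # (2.0, 1.0)
```

A zero determinant signals that the columns are linearly dependent, which is exactly when the system fails to have a unique solution.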
 