MHB Why Is There No Division for Vectors?

  • Thread starter: find_the_fun
  • Tags: Vectors
SUMMARY

The discussion addresses the absence of a straightforward division operation for vectors and matrices, emphasizing that division is not well-defined for non-square matrices. It explains that while matrix inversion can solve linear systems, it only applies to square matrices, and division by matrices can lead to contradictions similar to dividing by zero. The key takeaway is that vector division is feasible only in specific dimensions (1, 2, 4, and under certain conditions, 8 and 16), as established by Frobenius' theorem.

PREREQUISITES
  • Understanding of matrix algebra, specifically matrix inversion.
  • Familiarity with linear systems and the concept of determinants.
  • Knowledge of vector spaces and dimensionality in mathematics.
  • Basic concepts of scalar multiplication and its properties.
NEXT STEPS
  • Study the properties of matrix inversion and determinants in linear algebra.
  • Explore Frobenius' theorem and its implications in higher-dimensional spaces.
  • Learn about the programming language APL and its applications in mathematical operations.
  • Investigate the concept of matrix multiplication and its non-commutative nature.
USEFUL FOR

Mathematicians, computer graphics developers, and students of linear algebra seeking to understand the limitations of vector and matrix operations, particularly in the context of division and dimensionality.

find_the_fun
I was reading my computer graphics textbook and under frequently asked questions one was "why is there no vector division?" and it said "it turns out there is no 'nice' way to divide vectors". That's not a very good explanation. Why is it that matrices can't be divided?
 
You can sort of divide square matrices. Suppose you have $Ax=b$, where $A$ is a matrix and $x,b$ are vectors. Then you left multiply both sides by $A^{-1}$ (assuming it exists, which it will so long as $\det(A)\not=0$). Then you get $A^{-1}Ax=A^{-1}b$, and since $A^{-1}A=I$, then you have $Ix=A^{-1}b$, or $x=A^{-1}b$. It is sometimes possible (though not the most efficient method) to solve a linear system this way.

However, this sort of inverse only works with square matrices, because you need $A^{-1}A$ to be the right size. Since a vector (aside from 1 x 1 matrices, also known as numbers) is not a square matrix, you cannot do this kind of inversion. The key here is that you're trying to achieve some sort of multiplicative identity, $I$ in this case. You can't do that with non-square matrices like vectors.
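Here is a minimal NumPy sketch of this kind of "division" by a square matrix (assuming NumPy is available; the numbers are just an illustration):

```python
import numpy as np

# A square, invertible matrix (det(A) != 0) and a right-hand side vector b.
A = np.array([[2.0, 1.0],
              [1.0, 3.0]])
b = np.array([5.0, 10.0])

# "Dividing" b by A: left-multiply by the inverse of A...
x_via_inverse = np.linalg.inv(A) @ b

# ...though solving Ax = b directly is the more efficient, numerically safer route.
x_via_solve = np.linalg.solve(A, b)

print(x_via_inverse)  # approximately [1. 3.]
print(x_via_solve)    # same result, up to rounding
```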
 
The answer is: you CAN, but only in certain dimensions, under certain limited circumstances.

First of all, for "division" to even make sense, you need some kind of multiplication, first. And this multiplication has to be of the form:

vector times vector = same kind of vector.

It turns out that this is only possible in certain dimensions: 1,2,4 (and if you allow certain "strangenesses" 8 and 16). This is a very "deep" theorem, due to Frobenius, and requires a bit of high-powered algebra to prove.
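For a concrete picture in dimension 2: identifying the vector $(a, b)$ with the complex number $a + bi$ gives exactly such a multiplication, and every nonzero vector can then be divided by. A minimal Python sketch of the idea:

```python
# Identify the 2D vector (a, b) with the complex number a + bi.
# Complex multiplication is a "vector times vector = vector" product,
# and division by any nonzero vector works.

u = complex(1.0, 2.0)   # the vector (1, 2)
v = complex(3.0, -1.0)  # the vector (3, -1)

q = u / v               # "vector division"
print(q)                # (0.1+0.7j)
print(q * v)            # recovers u = (1+2j), up to rounding
```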

Now matrices only have such a multiplication when they are $n \times n$ (otherwise we get:

matrix times matrix = matrix of a different size, which turns out to matter).

However, it turns out we can have "bad matrices", like so:

$AB = 0$ where neither $A$ nor $B$ are the 0-matrix. For example:

$A = \begin{bmatrix}1&0\\0&0 \end{bmatrix}$

$B = \begin{bmatrix}0&0\\0&1 \end{bmatrix}$

Now suppose, just for the sake of argument, we had a matrix we could call:

$\dfrac{1}{A}$.

Such a matrix should satisfy:

$\dfrac{1}{A}A = I$, the identity matrix.

Then:

$B = IB = \left(\dfrac{1}{A}A\right)B = \dfrac{1}{A}(AB) = \dfrac{1}{A}0 = 0$

which is a contradiction, since $B \neq 0$.

In other words, "dividing by such a matrix" is rather like dividing by zero: it leads to nonsense.
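A quick NumPy check of this zero-divisor behaviour (assuming NumPy; purely an illustration):

```python
import numpy as np

# The "bad matrices" from above: neither is zero, yet their product is.
A = np.array([[1.0, 0.0],
              [0.0, 0.0]])
B = np.array([[0.0, 0.0],
              [0.0, 1.0]])

print(A @ B)             # the zero matrix
print(np.linalg.det(A))  # 0.0 -- A is singular, so no "1/A" exists
```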

It turns out that the condition:

$AB = 0, A,B \neq 0$

is equivalent to:

$Av = 0$ for some vector $v \neq 0$.

Let's see why this is important by comparing matrix multiplication with scalar multiplication:

If $rA = rB$, we have:

$\dfrac{1}{r}(rA) = \left(\dfrac{1}{r}r\right)A = 1A = A$

and also:

$\dfrac{1}{r}(rA) = \dfrac{1}{r}(rB) = \left(\dfrac{1}{r}r\right)B = 1B = B$

provided $r \neq 0$ (which is almost every scalar).

This allows us to conclude $A = B$; in other words, the assignment:

$A \to rA$ is one-to-one.

However, if we take matrices:

$RA = RB$ does NOT imply $A = B$. For example, let

$R = \begin{bmatrix} 1&0\\0&0 \end{bmatrix}$

$A = \begin{bmatrix} 0&0\\0&1 \end{bmatrix}$

$B = \begin{bmatrix} 0&0\\0&2 \end{bmatrix}$

Then we see, $RA = RB = 0$, but clearly $A$ and $B$ are different matrices.

So "left-multiplication by a matrix" is no longer 1-1, we means we can't uniquely "undo" it (which is what, at its heart, "division" is: the "un-doing" of multiplication).

I hope this made sense to you.
 
Deveno said:
The answer is: you CAN, but only in certain dimensions, under certain limited circumstances.

...

vector times vector = same kind of vector.

It turns out that this is only possible in certain dimensions: 1,2,4 (and if you allow certain "strangenesses" 8 and 16).

...

In other words, "dividing by such a matrix" is rather like dividing by zero, it leads to nonsense.

...

So "left-multiplication by a matrix" is no longer 1-1, we means we can't uniquely "undo" it (which is what, at its heart, "division" is: the "un-doing" of multiplication).

And despite all this, one can divide by almost all square matrices of any dimension, and not just those with 1, 2, 4, 8, or 16 components.
 
There's a programming language called APL, which was designed for mathematical work.

Regular division is denoted a÷b.
The same operator is used for the reciprocal: ÷b means 1/b.

Typically operations for matrices are denoted with a square block around the operator.
In particular the matrix inverse is ⌹B.
And matrix division is: A⌹B. This applies the inverse of B to A, i.e., it solves $Bx = A$.
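In NumPy terms (a rough analogue for illustration, not APL itself), that matrix division looks like:

```python
import numpy as np

B = np.array([[2.0, 0.0],
              [0.0, 4.0]])
A = np.array([6.0, 8.0])

# Rough NumPy analogue of APL's  A ⌹ B : apply the inverse of B to A,
# i.e. solve B x = A.
x = np.linalg.solve(B, A)
print(x)      # [3. 2.]
print(B @ x)  # recovers A: [6. 8.]
```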
 
