Why Is There No Division for Vectors?

  • Context: MHB 
  • Thread starter: find_the_fun
  • Tags: Vectors

Discussion Overview

The discussion centers on the question of why division is not defined for vectors and matrices, exploring the mathematical principles and limitations surrounding this topic. Participants delve into the implications of matrix operations, particularly focusing on the conditions under which certain types of division might be considered.

Discussion Character

  • Exploratory
  • Technical explanation
  • Debate/contested

Main Points Raised

  • Some participants note that traditional explanations for the absence of vector division are insufficient, prompting deeper inquiry into the mathematical foundations.
  • One participant explains that while square matrices can be "divided" through the use of inverses, this is contingent on the matrix being square and having a non-zero determinant.
  • Another participant introduces the idea that division can be defined in specific dimensions (1, 2, 4, and potentially 8 and 16) under certain conditions, referencing a theorem by Frobenius.
  • There is a discussion about the implications of "bad matrices" leading to contradictions when attempting to define division, likening it to dividing by zero.
  • One participant contrasts scalar multiplication with matrix multiplication, highlighting that left-multiplication by a matrix is not one-to-one, complicating the notion of division.
  • A participant mentions the programming language APL, which has specific notations for division and matrix operations, suggesting that different contexts may approach the concept of division differently.

Areas of Agreement / Disagreement

Participants express differing views on the conditions under which division might be applicable to vectors and matrices. While some agree on the limitations of division in general, others propose specific scenarios where it could be defined, indicating that the discussion remains unresolved.

Contextual Notes

The discussion highlights the complexities and nuances of defining division in the context of linear algebra, particularly concerning the properties of matrices and vectors. Limitations include the dependence on dimensionality and the nature of the matrices involved.

find_the_fun
I was reading my computer graphics textbook, and one of the frequently asked questions was "why is there no vector division?" The answer given was "it turns out there is no 'nice' way to divide vectors". That's not a very good explanation. Why is it that vectors and matrices can't be divided?
 
You can sort of divide square matrices. Suppose you have $Ax=b$, where $A$ is a matrix and $x,b$ are vectors. Then you left multiply both sides by $A^{-1}$ (assuming it exists, which it will so long as $\det(A)\not=0$). Then you get $A^{-1}Ax=A^{-1}b$, and since $A^{-1}A=I$, then you have $Ix=A^{-1}b$, or $x=A^{-1}b$. It is sometimes possible (though not the most efficient method) to solve a linear system this way.
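In NumPy this looks like the following (a small made-up system, just to illustrate; in practice `np.linalg.solve` is preferred over forming the inverse explicitly):

```python
import numpy as np

# A hypothetical 2x2 system A x = b, just to illustrate the idea above.
A = np.array([[2.0, 1.0],
              [1.0, 3.0]])
b = np.array([5.0, 10.0])

# "Dividing" by A: left-multiply by the inverse (valid since det(A) != 0).
x_via_inverse = np.linalg.inv(A) @ b

# In practice np.linalg.solve is preferred: it factorizes A instead of
# forming the inverse, which is faster and numerically more accurate.
x = np.linalg.solve(A, b)

print(np.allclose(x, x_via_inverse))  # both approaches agree
print(np.allclose(A @ x, b))          # and x really solves the system
```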

However, this sort of inverse only works with square matrices, because you need $A^{-1}A$ and $AA^{-1}$ to both make sense and equal the same identity matrix. Since a vector (aside from $1 \times 1$ matrices, also known as numbers) is not a square matrix, you cannot do this kind of inversion. The key here is that you're trying to achieve some sort of multiplicative identity, $I$ in this case. You can't do that with non-square matrices like vectors.
 
The answer is: you CAN, but only in certain dimensions, under certain limited circumstances.

First of all, for "division" to even make sense, you need some kind of multiplication, first. And this multiplication has to be of the form:

vector times vector = same kind of vector.

It turns out that this is only possible in certain dimensions: 1, 2, and 4 (and, if you allow certain "strangenesses" such as giving up associativity, 8 and 16). This is a very "deep" theorem, due to Frobenius, and requires a bit of high-powered algebra to prove.
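The dimension-2 case is just the complex numbers: identify the vector $(a,b)$ with $a + bi$, and every non-zero vector then has a multiplicative inverse. A quick sketch:

```python
# Identify the 2D vector (a, b) with the complex number a + bj.
# Complex division then gives a genuine "vector division" in dimension 2.
u = complex(3.0, 4.0)   # the vector (3, 4)
v = complex(1.0, 2.0)   # the vector (1, 2)

q = u / v               # the unique vector with q * v == u (since v != 0)

print(q)                        # the vector (2.2, -0.4)
print(abs(q * v - u) < 1e-12)   # True: q really "undoes" multiplication by v
```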

Now matrices only have such a multiplication when they are $n \times n$ (otherwise we get:

matrix times matrix = matrix of a different size,

which turns out to matter).

However, it turns out we can have "bad matrices", like so:

$AB = 0$ where neither $A$ nor $B$ are the 0-matrix. For example:

$A = \begin{bmatrix}1&0\\0&0 \end{bmatrix}$

$B = \begin{bmatrix}0&0\\0&1 \end{bmatrix}$
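This pair of zero divisors is easy to check numerically; note that both matrices are singular (zero determinant), which is no accident:

```python
import numpy as np

# The two "bad matrices" from above: neither is zero, yet their product is.
A = np.array([[1.0, 0.0],
              [0.0, 0.0]])
B = np.array([[0.0, 0.0],
              [0.0, 1.0]])

print(A @ B)                                # the zero matrix
print(np.linalg.det(A), np.linalg.det(B))   # both determinants are 0
```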

Now suppose, just for the sake of argument, we had a matrix we could call:

$\dfrac{1}{A}$.

Such a matrix should satisfy:

$\dfrac{1}{A}A = I$, the identity matrix.

Then:

$B = IB = \left(\dfrac{1}{A}A\right)B = \dfrac{1}{A}(AB) = \dfrac{1}{A}0 = 0$

which is a contradiction, since $B \neq 0$

In other words, "dividing by such a matrix" is rather like dividing by zero, it leads to nonsense.

It turns out that the condition:

$AB = 0, A,B \neq 0$

is equivalent to:

$Av = 0$ for some vector $v \neq 0$.
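For the matrix $A$ above, such a null vector is $v = (0, 1)$, and stacking null vectors as columns is exactly how a matrix like $B$ arises:

```python
import numpy as np

A = np.array([[1.0, 0.0],
              [0.0, 0.0]])

# A "kills" the non-zero vector v = (0, 1): A v = 0.
v = np.array([0.0, 1.0])
print(A @ v)                        # [0. 0.]

# Stacking such null vectors as columns rebuilds the matrix B from before,
# which is why A v = 0 (for some v != 0) is equivalent to A B = 0.
B = np.column_stack([np.zeros(2), v])
print(np.allclose(A @ B, 0))        # True
```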

Let's see why this is important by comparing matrix multiplication with scalar multiplication:

If $rA = rB$, we have:

$\dfrac{1}{r}(rA) = \left(\dfrac{1}{r}r\right)A = 1A = A$

and also:

$\dfrac{1}{r}(rA) = \dfrac{1}{r}(rB) = \left(\dfrac{1}{r}r\right)B = 1B = B$

provided $r \neq 0$ (which is almost every scalar).

This allows us to conclude $A = B$, in other words, the assignment:

$A \to rA$ is one-to-one.

However, if we take matrices:

$RA = RB$ does NOT imply $A = B$, for example let

$R = \begin{bmatrix} 1&0\\0&0 \end{bmatrix}$

$A = \begin{bmatrix} 0&0\\0&1 \end{bmatrix}$

$B = \begin{bmatrix} 0&0\\0&2 \end{bmatrix}$

Then we see, $RA = RB = 0$, but clearly $A$ and $B$ are different matrices.
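Checking this numerically:

```python
import numpy as np

R = np.array([[1.0, 0.0],
              [0.0, 0.0]])
A = np.array([[0.0, 0.0],
              [0.0, 1.0]])
B = np.array([[0.0, 0.0],
              [0.0, 2.0]])

# Left-multiplication by R collapses A and B to the same product...
print(np.array_equal(R @ A, R @ B))   # True: both products are the zero matrix
# ...even though A != B, so no "division by R" can recover them.
print(np.array_equal(A, B))           # False
```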

So "left-multiplication by a matrix" is no longer 1-1, which means we can't uniquely "undo" it (and that, at its heart, is what "division" is: the "un-doing" of multiplication).

I hope this made sense to you.
 
Deveno said:
The answer is: you CAN, but only in certain dimensions, under certain limited circumstances.

...

vector times vector = same kind of vector.

It turns out that this is only possible in certain dimensions: 1,2,4 (and if you allow certain "strangenesses" 8 and 16).

...

In other words, "dividing by such a matrix" is rather like dividing by zero, it leads to nonsense.

...

So "left-multiplication by a matrix" is no longer 1-1, which means we can't uniquely "undo" it (and that, at its heart, is what "division" is: the "un-doing" of multiplication).
And despite all this, one can divide by almost every square matrix of any size (namely, every one with non-zero determinant), not just those acting in dimensions 1, 2, 4, 8 or 16.
 
There's a programming language called APL, which was designed for applying math.

Regular division is denoted a÷b.
The same operator is used for the reciprocal: ÷b means 1/b.

Typically operations for matrices are denoted with a square block around the operator.
In particular the matrix inverse is ⌹B.
And matrix division is: A⌹B. This applies the inverse of B to A; that is, it solves $BX = A$.
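A rough NumPy analogue (assuming, as with APL's domino, that A⌹B returns the solution of $BX = A$; the helper name `domino` is made up for illustration):

```python
import numpy as np

# Hypothetical helper mimicking APL's domino operator: X "divided" by Y
# is the solution R of Y @ R = X (assumed semantics; the name is made up).
def domino(X, Y):
    return np.linalg.solve(Y, X)

B = np.array([[2.0, 0.0],
              [0.0, 4.0]])
A = np.array([8.0, 8.0])

R = domino(A, B)                # "A divided by B"
print(R)                        # [4. 2.]
print(np.allclose(B @ R, A))    # True
```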
 
