Can any matrix be expressed as the product of two vectors?


Discussion Overview

The discussion revolves around whether any matrix can be expressed as the product of two vectors, specifically through the outer product. Participants explore the implications of this idea, including conditions under which it may or may not hold true, and the properties of matrices that arise from such products.

Discussion Character

  • Debate/contested
  • Mathematical reasoning

Main Points Raised

  • Some participants propose that a matrix can be expressed as the outer product of two vectors, written componentwise as ##M_{ab} = v_a w_b##, and question the conditions under which this holds.
  • Others argue that not all matrices can be represented this way, particularly noting that matrices of rank greater than one cannot be expressed as the product of two vectors.
  • A participant highlights that a matrix formed by the outer product of a column vector and a row vector has rank one, and thus cannot represent invertible matrices of rank two or higher.
  • Some participants discuss the concept of expressing any matrix as a sum of rank-1 matrices, suggesting that while individual matrices may not be expressible as a product of two vectors, they can be represented through linear combinations of such products.
  • There is a mention of the determinant of rank-one matrices being zero, indicating a limiting property of such matrices.
  • Further exploration includes the minimal length of linear combinations of outer products needed to represent bilinear mappings, with references to theoretical bounds on this length.

Areas of Agreement / Disagreement

Participants express differing views on the ability to represent matrices as products of two vectors. While some agree on the limitations imposed by matrix rank, others emphasize the broader applicability of linear combinations of outer products. The discussion remains unresolved regarding the generality of the claim.

Contextual Notes

Participants reference specific mathematical properties and implications of matrix rank and determinants, but do not resolve the underlying assumptions or definitions of vector products used in their arguments.

DuckAmuck
For example, does this always hold true?

##M_{ab} = v_a w_b##

If not, where does it break down?
 
Let's say we have a column vector ##A = \begin{bmatrix}a \\ b\end{bmatrix}## and a row vector ##B = \begin{bmatrix}c & d\end{bmatrix}## such that

##AB = \begin{bmatrix}ac & ad \\ bc & bd\end{bmatrix} = \begin{bmatrix}0 & 1 \\ 1 & 0\end{bmatrix}##.

Is this possible, knowing that ##xy = 0## for real numbers ##x,y## implies that either ##x=0## or ##y=0## ?
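The obstruction can also be checked numerically. The sketch below (in NumPy, with illustrative vector entries I made up) confirms that the swap matrix has rank 2, while a column vector times a row vector always has rank at most 1, so no such factorization exists:

```python
import numpy as np

# The counterexample above: the 2x2 "swap" matrix.
M = np.array([[0.0, 1.0],
              [1.0, 0.0]])

# M has rank 2, but any product (column vector) @ (row vector)
# has rank at most 1, so M cannot be such a product.
print(np.linalg.matrix_rank(M))  # 2

# For comparison, an outer product with arbitrary made-up entries:
A = np.array([[2.0], [3.0]])   # column vector
B = np.array([[5.0, 7.0]])     # row vector
print(np.linalg.matrix_rank(A @ B))  # 1
```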
 
hilbert2 said:
Let's say we have a column vector ##A = \begin{bmatrix}a \\ b\end{bmatrix}## and a row vector ##B = \begin{bmatrix}c & d\end{bmatrix}## such that

##AB = \begin{bmatrix}ac & ad \\ bc & bd\end{bmatrix} = \begin{bmatrix}0 & 1 \\ 1 & 0\end{bmatrix}##.

Is this possible, knowing that ##xy = 0## for real numbers ##x,y## implies that either ##x=0## or ##y=0## ?

Thanks. I was trying to think of a counterexample. This one is very obvious.
 
No, such a matrix has rank 1. Using the notation that ##\mathbf w^T## is the row vector of ##\mathbf w##, then row 1 of M is ##v_1 \mathbf w^T##, row 2 is ##v_2 \mathbf w^T##, etc. Every row is a multiple of every other row. And every column is a multiple of every other column, as all have the form ##w_j \mathbf v##.

In particular, any invertible matrix of size 2x2 or larger is a counterexample.

What is true is that you can express any matrix M of rank n as a sum of n rank-1 matrices ##\sum_{i=1}^n {\mathbf v_i \mathbf w_i^T}##.
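That sum of rank-1 matrices can be computed explicitly from the singular value decomposition: each term is a singular value times the outer product of the corresponding singular vectors. A sketch in NumPy, using a rank-2 matrix with made-up entries:

```python
import numpy as np

# A rank-2 matrix (illustrative values) to rebuild as a sum of
# rank-1 outer products via the singular value decomposition.
M = np.array([[1.0, 2.0, 0.0],
              [0.0, 1.0, 1.0]])

U, s, Vt = np.linalg.svd(M)
r = np.linalg.matrix_rank(M)

# Sum of r rank-1 matrices: sigma_i * (u_i outer v_i).
reconstruction = sum(s[i] * np.outer(U[:, i], Vt[i, :]) for i in range(r))

print(np.allclose(M, reconstruction))  # True
```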
 
DuckAmuck said:
For example, does this always hold true?

##M_{ab} = v_a w_b##

If not, where does it break down?
Any matrix can always be written as a linear combination of such products, and ##\operatorname{rank}M## is the minimal number of terms required.
 
DuckAmuck said:
For example, does this always hold true?

##M_{ab} = v_a w_b##

If not, where does it break down?

You didn't say how you define the "product of two vectors". Let's assume you are thinking of this type of example.
##v_a = (a_1, a_2)##
##v_b = (b_1, b_2, b_3)##
The "product" is a table of data with 2 rows and 3 columns. The ##(i,j)## entry of the table is ##a_i b_j##.

It would be nice if all data tables were so simple! A person who could do multiplication wouldn't need the body of the table.

Thinking about the various complicated data tables that we encounter makes it clear that not all data tables (matrices) have such a simple structure.

However, as @fresh_42 indicates in post #6, all data tables can be written as linear combinations of such simple data tables. This is a remarkable and important fact. It's a concrete interpretation of the "singular value decomposition" of a matrix.
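The 2-row, 3-column table described above is exactly what an outer product routine produces; a small sketch with made-up values for the ##a_i## and ##b_j##:

```python
import numpy as np

# Entry (i, j) of the table is a_i * b_j (illustrative values).
a = np.array([2.0, 3.0])
b = np.array([1.0, 4.0, 5.0])

table = np.outer(a, b)
print(table)
# [[ 2.  8. 10.]
#  [ 3. 12. 15.]]
```

As the post notes, anyone holding ##a## and ##b## can regenerate the whole body of the table, which is why such tables are "simple".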
 
Also, I guess the determinant of such matrices (if they're square) is always zero, as the columns are multiples of each other. This is a very limiting property for a matrix.
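A numerical spot check of this, with arbitrary made-up vectors:

```python
import numpy as np

# A square outer product: columns are multiples of v, so det = 0.
v = np.array([1.0, 2.0, 3.0])
w = np.array([4.0, 5.0, 6.0])
M = np.outer(v, w)

print(np.linalg.det(M))  # ~0, up to floating-point rounding
```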
 
hilbert2 said:
Also, I guess the determinant of such matrices (if they're square) is always zero, as the columns are multiples of each other. This is a very limiting property for a matrix.
Sure, they are rank one matrices.

The question gets interesting, if we ask for the minimal length of linear combinations of ##x\otimes y \otimes z## to represent a given bilinear mapping, e.g. matrix multiplication. If we define the matrix exponent ## \omega := \min\{\,\gamma\,|\,(A;B) \longmapsto A\cdot B = \sum_{i}^Rx_i(A) \otimes y_i(B) \otimes Z_i \,\wedge \, R=O(n^\gamma)\,\} ## then ##2\leq \omega \leq 2.3727## and we do not know how close we can come to the lower bound.

Two is the lower and three the trivial upper bound. I find this fascinating, as it somehow contains the question of whether there are intrinsically difficult problems out there, or whether we just haven't found the right clue - similar to P = NP and the ERH.
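A concrete instance of beating the trivial upper bound: Strassen's identities multiply two 2x2 matrices with 7 scalar products instead of 8, which is what first pushed ##\omega## below 3 (to ##\log_2 7 \approx 2.807##). A sketch in Python/NumPy, with the test matrices chosen arbitrarily:

```python
import numpy as np

def strassen_2x2(A, B):
    """Multiply two 2x2 matrices using Strassen's 7 scalar products
    (instead of the naive 8), giving omega <= log2(7) ~ 2.807."""
    (a, b), (c, d) = A
    (e, f), (g, h) = B
    m1 = (a + d) * (e + h)
    m2 = (c + d) * e
    m3 = a * (f - h)
    m4 = d * (g - e)
    m5 = (a + b) * h
    m6 = (c - a) * (e + f)
    m7 = (b - d) * (g + h)
    return np.array([[m1 + m4 - m5 + m7, m3 + m5],
                     [m2 + m4,           m1 - m2 + m3 + m6]])

A = np.array([[1.0, 2.0], [3.0, 4.0]])
B = np.array([[5.0, 6.0], [7.0, 8.0]])
print(np.allclose(strassen_2x2(A, B), A @ B))  # True
```

Applied recursively to block matrices, these 7 products are exactly a length-7 linear combination of the ##x_i(A) \otimes y_i(B) \otimes Z_i## terms above for the ##2\times 2## multiplication tensor.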
 
fresh_42 said:
The question gets interesting, if we ask for the minimal length of linear combinations of ##x\otimes y \otimes z## to represent a given bilinear mapping, e.g. matrix multiplication. If we define the matrix exponent ## \omega := \min\{\,\gamma\,|\,(A;B) \longmapsto A\cdot B = \sum_{i}^Rx_i(A) \otimes y_i(B) \otimes Z_i \,\wedge \, R=O(n^\gamma)\,\} ## then ##2\leq \omega \leq 2.3727## and we do not know how close we can come to the lower bound.

Thanks, this is a really interesting topic. I found a thesis on it with a Google search, and it doesn't require too many concepts unfamiliar to me.
 
