jostpuur
This is the problem: Suppose [itex]A\in\mathbb{R}^{n\times k}[/itex] is a matrix whose columns are linearly independent, with [itex]1\leq k\leq n[/itex]. I want to find a matrix [itex]B\in\mathbb{R}^{m\times n}[/itex] with [itex]n-k\leq m\leq n[/itex], whose elements can be computed nicely from [itex]A[/itex], and whose rows span the orthogonal complement of the [itex]k[/itex]-dimensional space spanned by the columns of [itex]A[/itex]. This means that
[tex] BA = 0 \in\mathbb{R}^{m\times k}[/tex]
too.
There always exists a [itex]B\in\mathbb{R}^{(n-k)\times n}[/itex] such that [itex]BA=0[/itex] and the rows of [itex]B[/itex] are linearly independent, but its elements don't always seem to have nice formulas.
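Numerically, one such [itex]B[/itex] (with no claim of nice closed-form entries) can always be extracted from the singular value decomposition of [itex]A[/itex]: the last [itex]n-k[/itex] left singular vectors span the orthogonal complement of the column space. A minimal NumPy sketch, assuming a numerical answer counts as "computing":

```python
import numpy as np

def complement_basis(A):
    """Return B whose n - k rows form an orthonormal basis of the
    orthogonal complement of the column space of A (n x k, full column rank)."""
    n, k = A.shape
    # Full SVD: columns of U beyond the first k span the complement.
    U, s, Vt = np.linalg.svd(A)
    return U[:, k:].T  # shape (n - k, n)

A = np.array([[1.0, 0.0],
              [2.0, 1.0],
              [3.0, 4.0]])  # n = 3, k = 2
B = complement_basis(A)
print(np.allclose(B @ A, 0))  # True
```

This gives exactly [itex]m = n - k[/itex] independent rows, but the entries are transcendental functions of the entries of [itex]A[/itex], not polynomial formulas.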
Sorry, my problem is not well defined, because I don't know what "computing nicely" means. But it is an interesting problem anyway, I insist.
Easy example: [itex]n=2, k=1[/itex]. Set
[tex] B = (-A_{21}, A_{11}).[/tex]
Then
[tex] B\left(\begin{array}{c}<br /> A_{11} \\ A_{21} \\<br /> \end{array}\right) = 0[/tex]
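As a quick sanity check of this formula (with an arbitrary column chosen for illustration):

```python
import numpy as np

A = np.array([[3.0],
              [5.0]])                  # any non-zero column, n = 2, k = 1
B = np.array([[-A[1, 0], A[0, 0]]])    # B = (-A21, A11)
print(B @ A)  # [[0.]]
```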
Another easy example: [itex]n=3, k=2[/itex]. Set
[tex] B = (A_{21}A_{32} - A_{31}A_{22},\; A_{31}A_{12} - A_{11}A_{32},\; A_{11}A_{22} - A_{21}A_{12}).[/tex]
Then
[tex] B\left(\begin{array}{cc}<br /> A_{11} & A_{12} \\<br /> A_{21} & A_{22} \\<br /> A_{31} & A_{32} \\<br /> \end{array}\right) = (0,0)[/tex]
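This [itex]B[/itex] is just the cross product of the two columns of [itex]A[/itex], i.e. the vector of signed [itex]2\times 2[/itex] minors. A short check with arbitrary illustrative numbers:

```python
import numpy as np

A = np.array([[1.0, 2.0],
              [3.0, 4.0],
              [5.0, 6.0]])  # n = 3, k = 2
# Cross product of the two columns = vector of signed 2x2 minors of A.
B = np.cross(A[:, 0], A[:, 1]).reshape(1, 3)
print(np.allclose(B @ A, 0))  # True
```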
A difficult example! [itex]n=3, k=1[/itex]. What do you do now? We would like to get this:
[tex] \left(\begin{array}{ccc}<br /> B_{11} & B_{12} & B_{13} \\<br /> B_{21} & B_{22} & B_{23} \\<br /> \end{array}\right)<br /> \left(\begin{array}{c}<br /> A_{11} \\ A_{21} \\ A_{31} \\<br /> \end{array}\right)<br /> = \left(\begin{array}{c}<br /> 0 \\ 0 \\<br /> \end{array}\right)[/tex]
If you find formulas for the elements of [itex]B[/itex] that are not awfully complicated, I'll be surprised.
Here's an interesting matrix:
[tex] B = \left(\begin{array}{ccc}<br /> A_{21} & -A_{11} & 0 \\<br /> 0 & A_{31} & -A_{21} \\<br /> -A_{31} & 0 & A_{11} \\<br /> \end{array}\right)[/tex]
This matrix has the property that its three rows always span the two-dimensional orthogonal complement of [itex]A_{*1}[/itex]. This can happen in two different ways: either all the [itex]B_{i*}[/itex] are non-zero and linearly dependent, or one of the [itex]B_{i*}[/itex] is zero and the other two are linearly independent.
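The spanning claim can be tested numerically: each row of this [itex]B[/itex] is orthogonal to [itex]A_{*1}[/itex] by construction, and for randomly sampled non-zero columns the rank comes out as 2. A sketch:

```python
import numpy as np

def three_row_B(a):
    """The 3 x 3 matrix from the post, built from the column a = A_{*1}."""
    a11, a21, a31 = a
    return np.array([[ a21, -a11,  0.0],
                     [ 0.0,  a31, -a21],
                     [-a31,  0.0,  a11]])

rng = np.random.default_rng(0)
for _ in range(1000):
    a = rng.standard_normal(3)
    B = three_row_B(a)
    assert np.allclose(B @ a, 0)          # every row is orthogonal to a
    assert np.linalg.matrix_rank(B) == 2  # rows span the 2D complement
print("ok")
```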
That's an interesting remark! It is difficult to come up with a formula for two vectors that span the two-dimensional orthogonal complement, but it is easy to come up with a formula for three vectors that span it!
What happens with larger matrices [itex]A[/itex]? Are we going to get some interesting function [itex](n,k)\mapsto m(n,k)[/itex] that tells how many vectors we need to span the [itex](n-k)[/itex]-dimensional orthogonal complement "nicely"?
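One candidate guess, generalizing the examples above (this is my extrapolation, not something established in the post): for every [itex](k+1)[/itex]-element subset [itex]S[/itex] of row indices, the vector supported on [itex]S[/itex] whose entries are the signed [itex]k\times k[/itex] minors of [itex]A[/itex] restricted to [itex]S[/itex] is orthogonal to every column of [itex]A[/itex], because [itex]B_{S*}A_{*j}[/itex] is the cofactor expansion of a [itex](k+1)\times(k+1)[/itex] determinant with a repeated column. That would suggest [itex]m(n,k)=\binom{n}{k+1}[/itex], which matches [itex]1[/itex] for [itex](n,k)=(2,1)[/itex] and [itex](3,2)[/itex], and [itex]3[/itex] for [itex](3,1)[/itex]. A numerical sketch of this construction:

```python
import numpy as np
from itertools import combinations

def minor_rows(A):
    """For each (k+1)-subset S of row indices, build the vector supported
    on S whose entries are signed k x k minors of A restricted to S.
    Each such vector is orthogonal to every column of A."""
    n, k = A.shape
    rows = []
    for S in combinations(range(n), k + 1):
        v = np.zeros(n)
        for j, i in enumerate(S):
            rest = [r for r in S if r != i]
            v[i] = (-1) ** j * np.linalg.det(A[rest, :])
        rows.append(v)
    return np.array(rows)  # shape (C(n, k+1), n)

rng = np.random.default_rng(1)
n, k = 5, 2
A = rng.standard_normal((n, k))
B = minor_rows(A)
print(B.shape[0])                         # 10 = C(5, 3)
print(np.allclose(B @ A, 0))              # True
print(np.linalg.matrix_rank(B) == n - k)  # True
```

The rows are polynomial in the entries of [itex]A[/itex], and when [itex]A[/itex] has rank [itex]k[/itex] some [itex]k\times k[/itex] submatrix is nonsingular, so the subsets extending those rows already give [itex]n-k[/itex] independent vectors; the price is that [itex]\binom{n}{k+1}[/itex] grows much faster than [itex]n-k[/itex].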