Dimension of an intersection between a random subspace and a fixed subspace

In summary: the problem is easy to state, but neither the poster nor their supervisor knows the proper approach to tackle it; one suggestion raised in the thread is that the singular value decomposition might exhibit the column span of the matrix.
  • #1
Petawa
I've been struggling with this problem for about two weeks. My supervisor is also stumped - though the problem is easy to state, I don't know the proper approach to tackle it. I'd appreciate any hints or advice.

Let V be a random k-dimensional vector subspace of ℝ^n, chosen uniformly over all possible k-dimensional subspaces. Let X ∈ ℝ^{n×n} be a symmetric matrix whose column space is contained in V. Now I add constraints to X: given some pairs (i, j) with 1 ≤ i < j ≤ n, I need X_ij = 0. The pairs are fixed and independent of V. How many of these zero constraints can I impose before (with high probability) the only solution is X = 0?

I've found a sufficient condition for a non-zero solution to exist: the number of constraints q must satisfy q < k(k+1)/2. From simulations, I think it is also a necessary condition (with probability one), but I can't seem to show it. I'd appreciate any ideas on how I might proceed.
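For concreteness, here is a minimal NumPy sketch of the kind of simulation I mean (not my original code; the sizes n = 8, k = 3 and the helper names are just illustrative). It writes X ∈ M_V as X = Q S Q^T with S symmetric k×k, turns each zero constraint into a linear equation in the entries of S, and asks whether the resulting system has a non-trivial null space.

```python
import numpy as np

def sample_subspace(n, k, rng):
    """Orthonormal basis of a uniformly random k-dimensional subspace of R^n."""
    G = rng.standard_normal((n, k))
    Q, _ = np.linalg.qr(G)   # the column span of a Gaussian matrix is uniform on the Grassmannian
    return Q

def nonzero_solution_exists(Q, pairs, tol=1e-10):
    """True if some non-zero symmetric X = Q S Q^T has X[i, j] = 0 for all (i, j) in pairs.
    The free parameters are the k(k+1)/2 entries S[a, b] with a <= b."""
    n, k = Q.shape
    params = [(a, b) for a in range(k) for b in range(a, k)]
    A = np.zeros((len(pairs), len(params)))
    for r, (i, j) in enumerate(pairs):
        for c, (a, b) in enumerate(params):
            coef = Q[i, a] * Q[j, b]          # coefficient of S[a, b] in (Q S Q^T)[i, j]
            if a != b:
                coef += Q[i, b] * Q[j, a]     # S is symmetric, so S[b, a] = S[a, b]
            A[r, c] = coef
    return np.linalg.matrix_rank(A, tol) < len(params)

rng = np.random.default_rng(0)
n, k = 8, 3                                   # illustrative sizes
pairs = [(i, j) for i in range(n) for j in range(i + 1, n)]
Q = sample_subspace(n, k, rng)
for q in range(1, len(pairs) + 1):
    if not nonzero_solution_exists(Q, pairs[:q]):
        print("smallest q forcing X = 0:", q, " vs k(k+1)/2 =", k * (k + 1) // 2)
        break
```

Runs like this are what suggest to me that the transition happens at q = k(k+1)/2.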

Proof of sufficient condition
Let M_V be the space of symmetric n×n matrices whose column space is contained in V. An orthogonal basis for M_V is {Q(e_i e_j^T + e_j e_i^T)Q^T : 1 ≤ i ≤ j ≤ k}, where the columns of Q ∈ ℝ^{n×k} form an orthonormal basis for V and the e_i are the standard basis vectors of ℝ^k, so dim(M_V) = k(k+1)/2.

Let M_X be the space of symmetric n×n matrices that satisfy the q zero constraints; since the constraints fix q distinct entries, they are linearly independent and dim(M_X) = n(n+1)/2 − q. Now suppose no non-zero X satisfying both requirements exists: this means M_V ∩ M_X = {0}. Hence dim(M_V + M_X) = dim(M_V) + dim(M_X) = k(k+1)/2 + n(n+1)/2 − q. Since M_V + M_X is contained in the space of symmetric matrices, its dimension is at most n(n+1)/2. Thus "no non-zero X" implies q ≥ k(k+1)/2, and the contrapositive gives a non-zero solution whenever q < k(k+1)/2.
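(As a numerical sanity check on this dimension count, here is a small NumPy sketch, with illustrative sizes, that stacks the vectorized basis elements Q(e_i e_j^T + e_j e_i^T)Q^T and confirms their rank is k(k+1)/2.)

```python
import numpy as np

rng = np.random.default_rng(1)
n, k = 7, 3
Q, _ = np.linalg.qr(rng.standard_normal((n, k)))     # orthonormal basis of V

E = np.eye(k)
basis = []
for i in range(k):
    for j in range(i, k):
        B = np.outer(E[:, i], E[:, j]) + np.outer(E[:, j], E[:, i])
        basis.append((Q @ B @ Q.T).ravel())           # vectorize each basis matrix of M_V

M = np.stack(basis)                                   # k(k+1)/2 rows, n^2 columns
print(np.linalg.matrix_rank(M), k * (k + 1) // 2)     # both equal dim(M_V)
```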
 
  • #2
I'm still struggling with this. It seems to be related to Grassmann manifolds: the set of all k-dimensional subspaces of ℝ^n forms the Grassmannian, and the uniform distribution I want is the invariant measure on it. However, I can't find any papers that address this problem. Perhaps I'm using the wrong terminology? Does anyone have alternative keywords or anything else that might help?

Thanks again.
 
  • #3
I probably couldn't solve this problem even if I understood it, but for the sake of someone who could, I think you should clarify it.

Petawa said:
Let X ∈ ℝ^{n×n} be a symmetric matrix whose column space is contained in V.

How many of these zero constraints can I satisfy before (with high probability) the only solution is X=0?

I don't understand the use of "Let X be..." in combination with the request to find X as a solution to something. Is "high probability" any non-zero probability?

I'm also curious how you implement a uniform distribution over all k dimensional subspaces of real n-space.
 
  • #4
You're right, my choice of words is poor. X is a matrix lying in the space M_V ∩ M_X (using the notation from the proof in the original post). That is, X's column space is contained in V and X satisfies the zero constraints. When I said "a non-zero solution exists", I should have said "M_V ∩ M_X contains a non-zero element".

Is "high probability" any non-zero probability?
If my conjecture is right, P(M_V ∩ M_X contains a non-zero element) is 1 for q < k(k+1)/2 and 0 for q ≥ k(k+1)/2. However, a result with any non-zero probability would be equally useful.

I'm also curious how you implement a uniform distribution over all k dimensional subspaces of real n-space.
I had exactly the same question when I started looking into this. The distribution has support on a so-called "Grassmann manifold", the theory of which seems very well developed. My favourite paper is this one - they use a nice graphical approach to relate random subspaces to random points on a sphere. Hopefully P(M_V ∩ M_X contains a non-zero element) has a transition point from 1 to 0 at q = k(k+1)/2, so we can ignore the exact specification of the distribution over the Grassmannian. That said, working directly with the distribution might be the correct approach; I just haven't had any luck with it so far, and it gets very complex very quickly.
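(To make "uniform over the Grassmannian" concrete, the standard recipe I use in simulations is the following sketch, assuming NumPy: the column span of an n×k matrix with i.i.d. standard normal entries is invariant under orthogonal transformations of ℝ^n, hence uniformly distributed over k-dimensional subspaces, and a QR factorization gives an orthonormal basis for it.)

```python
import numpy as np

def random_subspace(n, k, rng=None):
    """Orthonormal basis whose span is uniform over all k-dim subspaces of R^n."""
    if rng is None:
        rng = np.random.default_rng()
    G = rng.standard_normal((n, k))   # rotation-invariant Gaussian matrix
    Q, _ = np.linalg.qr(G)            # span(Q) = span(G)
    return Q
```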
 
  • #5
I don't know enough about this problem to make any suggestions. However, maybe if you answer some of my questions, you'll be struck by a blinding flash of inspiration.

I've always been impressed by the utility of the singular value decomposition of a matrix, and the most natural interpretation of a matrix (to me) is to view it as defining a linear mapping. So what would this point of view say in the case of your symmetric matrix, and how would the SVD exhibit the column span of the matrix?
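(For reference, a rough NumPy sketch of what I mean by the SVD exhibiting the column span: the left singular vectors belonging to non-zero singular values form an orthonormal basis of the column space, which for a matrix of the kind you describe lies inside V.)

```python
import numpy as np

rng = np.random.default_rng(2)
n, k = 6, 2
Q, _ = np.linalg.qr(rng.standard_normal((n, k)))     # orthonormal basis of a k-dim V
S = rng.standard_normal((k, k)); S = S + S.T         # random symmetric k x k
X = Q @ S @ Q.T                                      # symmetric, column space inside V

U, s, Vt = np.linalg.svd(X)
col_basis = U[:, s > 1e-10]                          # left singular vectors with nonzero sigma
print(col_basis.shape)                               # (n, rank of X)
print(np.allclose(Q @ Q.T @ col_basis, col_basis))   # each basis vector lies in V
```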

(Another thing that currently puzzles me is whether sunjin09's observation in the thread https://www.physicsforums.com/showthread.php?p=3849082#post3849082 is something that "anybody can see" from the geometric interpretation of the SVD. I can't see it! A summary of that is on another forum: http://www.sosmath.com/CBB/viewtopic.php?f=5&t=57763 )
 

1. What is the dimension of an intersection between a random subspace and a fixed subspace?

The dimension of an intersection between a random subspace and a fixed subspace can vary depending on the specific subspaces involved. However, it is always at most the smaller of the two dimensions, and in particular never exceeds the dimension of the fixed subspace.

2. How is the dimension of an intersection between a random subspace and a fixed subspace calculated?

The dimension of the intersection of two subspaces U and W can be computed from the dimension formula dim(U ∩ W) = dim(U) + dim(W) − dim(U + W). Equivalently, it is the maximal number of linearly independent vectors that lie in both subspaces.
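For instance, a minimal sketch of this computation (assuming NumPy; the function name is just illustrative), using bases stacked as matrix columns:

```python
import numpy as np

def intersection_dim(U, W, tol=1e-10):
    """dim(span(U) ∩ span(W)) via dim(U) + dim(W) - dim(U + W)."""
    rU = np.linalg.matrix_rank(U, tol)
    rW = np.linalg.matrix_rank(W, tol)
    rSum = np.linalg.matrix_rank(np.hstack([U, W]), tol)   # columns span U + W
    return rU + rW - rSum

# Example: the xy-plane and the xz-plane in R^3 meet in the x-axis.
U = np.array([[1., 0.], [0., 1.], [0., 0.]])
W = np.array([[1., 0.], [0., 0.], [0., 1.]])
print(intersection_dim(U, W))   # 1
```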

3. Is the dimension of an intersection between a random subspace and a fixed subspace always non-zero?

No, it is possible for the dimension of the intersection to be zero. This happens whenever the two subspaces share only the zero vector, for example when they are orthogonal to each other or, generically, when the sum of their dimensions does not exceed n.

4. How does the dimension of an intersection between a random subspace and a fixed subspace relate to linear independence?

If the dimension of the intersection between a random subspace and a fixed subspace is greater than zero, some non-zero vector belongs to both subspaces, so combining bases of the two subspaces gives a linearly dependent set. If the dimension is zero, the combined bases are linearly independent and the two subspaces form a direct sum.

5. Can the dimension of an intersection between a random subspace and a fixed subspace be greater than the dimension of the random subspace?

No, the dimension of an intersection between a random subspace and a fixed subspace cannot be greater than the dimension of the random subspace. This is because the intersection is a subset of the random subspace, and therefore cannot have more dimensions than the random subspace itself.
