Dimension of an intersection between a random subspace and a fixed subspace

Petawa
I've been struggling with this problem for about two weeks. My supervisor is also stumped - though the problem is easy to state, I don't know the proper approach to tackle it. I'd appreciate any hints or advice.

Let V be a random k-dimensional vector subspace of ℝ^n, chosen uniformly over all possible k-dimensional subspaces. Let X ∈ ℝ^{n×n} be a symmetric matrix whose column space is contained in V. Now I add constraints to X: given some pairs (i, j) with 1 ≤ i < j ≤ n, I require X_ij = 0. The pairs are fixed and independent of V. How many of these zero constraints can I impose before (with high probability) the only solution is X = 0?

I've found a sufficient condition for a non-zero solution to exist: the number of constraints q must satisfy q < k(k+1)/2. Simulations suggest this is also necessary (with probability one), but I can't seem to show it. I'd appreciate any ideas on how I might proceed.
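In case it helps, here is a minimal sketch of the kind of simulation I mean (NumPy assumed; the small test case n = 6, k = 3 is just illustrative). It samples V, builds the basis of M_V from the proof below, and checks whether the constraints force X = 0:

```python
import numpy as np
from itertools import combinations

rng = np.random.default_rng(0)

def intersection_is_trivial(n, k, pairs):
    """Return True if M_V ∩ M_X = {0} for one random draw of V."""
    # Uniform random k-dimensional subspace: orthonormalise a Gaussian matrix
    Q, _ = np.linalg.qr(rng.standard_normal((n, k)))
    # Orthogonal basis of M_V: Q(e_a e_b^T + e_b e_a^T)Q^T for 0 <= a <= b < k
    basis = [np.outer(Q[:, a], Q[:, b]) + np.outer(Q[:, b], Q[:, a])
             for a in range(k) for b in range(a, k)]
    # Each constraint X_ij = 0 is one linear equation in the basis coefficients
    A = np.array([[B[i, j] for B in basis] for (i, j) in pairs])
    # The intersection is trivial iff A has full column rank k(k+1)/2
    return np.linalg.matrix_rank(A) == k * (k + 1) // 2

n, k = 6, 3                                      # dim(M_V) = k(k+1)/2 = 6
pairs = list(combinations(range(n), 2))
print(intersection_is_trivial(n, k, pairs[:5]))  # q = 5 < 6: non-zero X always exists
print(intersection_is_trivial(n, k, pairs[:6]))  # q = 6: conjecture predicts trivial a.s.
```

The first call must print False (the constraint matrix has at most 5 rows, so its null space is non-trivial); the second probes the conjectured transition at q = k(k+1)/2.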

Proof of sufficient condition
Let M_V be the space of symmetric n×n matrices whose column space is contained in V. An orthogonal basis for M_V is {Q(e_i e_j^T + e_j e_i^T)Q^T : 1 ≤ i ≤ j ≤ k}, where the columns of Q ∈ ℝ^{n×k} form an orthonormal basis for V and the e_i are the standard basis vectors of ℝ^k, so dim(M_V) = k(k+1)/2.

Let M_X be the space of symmetric n×n matrices that satisfy the q zero constraints, so dim(M_X) = n(n+1)/2 − q. Now suppose no non-zero X satisfying both conditions exists; this means M_V ∩ M_X = {0}. Hence dim(M_V + M_X) = dim(M_V) + dim(M_X) = k(k+1)/2 + n(n+1)/2 − q. Since M_V + M_X is contained in the space of symmetric matrices, its dimension is at most n(n+1)/2. Thus "no non-zero X" implies q ≥ k(k+1)/2, and the contrapositive gives a non-zero solution whenever q < k(k+1)/2.
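The orthogonality and dimension count for the basis of M_V are easy to sanity-check numerically. A quick sketch (NumPy; the dimensions are arbitrary), using the trace inner product ⟨A, B⟩ = tr(A^T B) on matrices:

```python
import numpy as np

rng = np.random.default_rng(1)
n, k = 7, 3

# Orthonormal basis for a random k-dimensional V, as the columns of Q
Q, _ = np.linalg.qr(rng.standard_normal((n, k)))

# The claimed orthogonal basis of M_V
basis = [np.outer(Q[:, i], Q[:, j]) + np.outer(Q[:, j], Q[:, i])
         for i in range(k) for j in range(i, k)]

assert len(basis) == k * (k + 1) // 2
# Pairwise orthogonality: since Q^T Q = I, tr(B_ij^T B_cd) reduces to
# 2(δ_ic δ_jd + δ_id δ_jc), which vanishes for distinct index pairs
for s in range(len(basis)):
    for t in range(s + 1, len(basis)):
        assert abs(np.trace(basis[s].T @ basis[t])) < 1e-10
print("basis is orthogonal, dim =", len(basis))
```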
 
I'm still struggling with this. It seems to be related to Grassmannian manifolds, which provide a distribution over all possible random subspaces. However, I can't find any papers that address this problem. Perhaps I'm using the wrong terminology? Does anyone have alternative keywords, or anything else that might help?

Thanks again.
 
I probably couldn't solve this problem if I understood it, but for the sake of someone who could, I think you should clarify it.

Petawa said:
Let X ∈ ℝ^{n×n} be a symmetric matrix whose column space is contained in V.

How many of these zero constraints can I satisfy before (with high probability) the only solution is X=0?

I don't understand the use of "Let X be..." in combination with the request to find X as a solution to something. Is "high probability" any non-zero probability?

I'm also curious how you implement a uniform distribution over all k dimensional subspaces of real n-space.
 
You're right, my choice of words was poor. X is a matrix lying in the space M_V ∩ M_X (using the notation from the proof in the original post). That is, X's column space is contained in V and X satisfies the zero constraints. When I said "a non-zero solution exists", I should have said "M_V ∩ M_X contains a non-zero element".

Is "high probability" any non-zero probability?
If my conjecture is right, P(M_V ∩ M_X contains a non-zero element) is 1 for q < k(k+1)/2 and 0 for q ≥ k(k+1)/2. However, a result with any non-zero probability would be equally useful.

I'm also curious how you implement a uniform distribution over all k dimensional subspaces of real n-space.
I had exactly the same question when I started looking into this. The distribution is supported on a so-called "Grassmann manifold", whose theory seems very well developed. My favourite paper is this one - they use a nice graphical approach to relate random subspaces to random points on a sphere. Hopefully P(M_V ∩ M_X contains a non-zero element) jumps from 1 to 0 at q = k(k+1)/2, so the exact specification of the distribution over the Grassmannian shouldn't matter. That said, working directly with the distribution might still be the right approach; I just haven't had any luck with it so far - it gets very complex very quickly.
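For concreteness, the standard recipe for drawing from the uniform (rotation-invariant) distribution on the Grassmannian is to orthonormalise a Gaussian matrix. A sketch in NumPy:

```python
import numpy as np

rng = np.random.default_rng(2)
n, k = 5, 2

# G has i.i.d. N(0,1) entries, so UG has the same distribution as G for any
# fixed orthogonal U. QR of UG is (UQ)R, so span(Q) is a rotation-invariant,
# i.e. uniform, random k-dimensional subspace of ℝ^n.
G = rng.standard_normal((n, k))
Q, _ = np.linalg.qr(G)

assert np.allclose(Q.T @ Q, np.eye(k))        # columns are orthonormal
# The orthogonal projection onto span(Q) is basis-independent
P = Q @ Q.T
assert np.allclose(P @ P, P) and np.allclose(P, P.T)
```

Invariance under the orthogonal group characterises the uniform measure on the Grassmannian, which is why this simple construction works.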
 
I don't know enough about this problem to make any suggestions. However, maybe if you answer some of my questions, you'll be struck by a blinding flash of inspiration.

I've always been impressed by the utility of the singular value decomposition of a matrix, and the most natural interpretation of a matrix (to me) is to view it as defining a linear mapping. So what would this point of view say in the case of your symmetric matrix, and how would the SVD exhibit the column span of the matrix?
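On the SVD point: for a symmetric X, the left singular vectors belonging to non-zero singular values span exactly col(X). A hypothetical example (NumPy; X is built as Q(M + M^T)Q^T, so its column space lies in V by construction):

```python
import numpy as np

rng = np.random.default_rng(3)
n, k = 6, 3

Q, _ = np.linalg.qr(rng.standard_normal((n, k)))   # orthonormal basis of V
M = rng.standard_normal((k, k))
X = Q @ (M + M.T) @ Q.T                            # symmetric, col(X) ⊆ V

U, s, Vt = np.linalg.svd(X)
r = int(np.sum(s > 1e-10))                         # numerical rank, at most k
Ur = U[:, :r]                                      # orthonormal basis for col(X)

# col(X) ⊆ V  ⇔  projecting Ur onto V changes nothing
P_V = Q @ Q.T
assert np.allclose(P_V @ Ur, Ur)
print("rank =", r)
```

So the SVD makes the containment col(X) ⊆ V directly checkable: the rank is at most k, and every left singular vector with non-zero singular value is fixed by the projection onto V.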

(Another thing that currently puzzles me is whether sunjin09's observation in the thread https://www.physicsforums.com/showthread.php?p=3849082#post3849082 is something that "anybody can see" from the geometric interpretation of the SVD. I can't see it! A summary of that is on another forum: http://www.sosmath.com/CBB/viewtopic.php?f=5&t=57763 )
 