Matrix rank in terms of determinant polynomial

mupe03
TL;DR Summary: Can real analysis of the determinant of a matrix tell us about its rank?
An nxn matrix with a parameter p is given, and the question is: what is the rank of that matrix in terms of p? Gaussian elimination is the standard process, and I know how to do it.

But I was wondering whether the determinant of a matrix tells us if the matrix has independent columns, and thus when the rank is equal to n. If I find the determinant of the matrix as a polynomial Q(p) and use real analysis to determine its roots, I can find when the rank drops from n to n-1, but it gets harder to see when the rank drops to n-2 (i.e., at which of the roots that happens).

So far I've got a glimpse of an idea that the multiplicity of a root of Q(p) tells us how much the rank drops (for a root of multiplicity r, the rank drops to n-r), but all of this seems suspicious to me; I don't know whether it's just a coincidence. Also, this method breaks completely if the determinant is identically 0 to begin with: then the only information I have is that the rank is less than n, but I can't determine how much lower it drops. If anyone can help, thank you a lot.
 
I feel like more information on what the matrix is and how it depends on p is necessary.
 
mupe03 said:
Can real analysis of the matrix determinant tell us about the rank of that matrix? [...]

Consider the matrices
$$\begin{pmatrix} 1 & 0 \\ 0 & p \end{pmatrix} \quad\text{and}\quad \begin{pmatrix} 1 & 0 \\ 0 & p^{2} \end{pmatrix}.$$
Both matrices have rank 2 except at ##p = 0##, where they have rank 1, but ##Q(p) = p## for the first and ##Q(p) = p^2## for the second.

Also consider
$$\begin{pmatrix} p & 0 \\ 0 & p \end{pmatrix} \quad\text{and}\quad \begin{pmatrix} p & 1 \\ 0 & p \end{pmatrix}.$$
Both have ##Q(p) = p^2##, but at ##p = 0## the first has rank 0 and the second has rank 1. So I don't think that considering ##Q(p)## alone will give you anything useful.

The rank of ##A(p)## is equal to ##n - \dim \ker A(p)##. It is thus necessary to determine the dimension of the kernel. A starting point is to calculate the characteristic polynomial ##\chi_{A(p)}(\lambda) = \det (A(p) - \lambda I) = a_0(p) + a_1(p)\lambda + \dots + a_{n}(p)\lambda^n##, with ##a_0## being your ##Q##. If ##\chi_{A(p)}## has zero as a root with multiplicity ##j \geq 1##, then ##\dim \ker A(p)^j = j##. However, we can only conclude from this that ##1 \leq \dim \ker A(p) \leq j##; we would still require some further means to fix the dimension precisely.
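Such examples are quick to check with a computer algebra system; here is a minimal sympy sketch of the four 2x2 families above (sympy is assumed available, and this is just a spot check, not part of the argument):

```python
import sympy as sp

p = sp.symbols('p')

# The four 2x2 examples above, as sympy matrices.
examples = [
    sp.Matrix([[1, 0], [0, p]]),        # Q(p) = p,   rank 1 at p = 0
    sp.Matrix([[1, 0], [0, p**2]]),     # Q(p) = p^2, rank 1 at p = 0
    sp.Matrix([[p, 0], [0, p]]),        # Q(p) = p^2, rank 0 at p = 0
    sp.Matrix([[p, 1], [0, p]]),        # Q(p) = p^2, rank 1 at p = 0
]

dets  = [sp.expand(A.det()) for A in examples]   # the polynomials Q(p)
ranks = [A.subs(p, 0).rank() for A in examples]  # ranks at the root p = 0
```

The last three families all give the same ##Q(p) = p^2## while the ranks at ##p = 0## differ, which is the point of the examples.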
 
pasmith said:
Consider the matrices [...] So I don't think that considering Q(p) alone will give you anything useful.
And does that mean it's overcomplicating things, and I should just use the Gaussian method?
 
pasmith said:
Consider the matrices [...] So I don't think that considering Q(p) alone will give you anything useful.
Can you give an example of how what you provided could be used?
 
@mupe03: There is indeed a relation between the rank of the matrix A(p) and the multiplicity of the root of the determinant polynomial Q(p). Basically, the multiplicity of the root is at least as great as the corank = (n - rank) of the matrix. I.e. a simple root can only occur if the rank is n-1; if the rank is n-2, then the root must have multiplicity at least 2; ...; if the rank is n-k, then the root must have multiplicity at least k. So the multiplicity of the root of Q is always an upper bound for the corank.
But as shown in the examples above, the root can have multiplicity higher than the corank. This is caused by the way the indexed curve of matrices meets the locus of singular matrices: the higher the order of contact of the index curve with the locus S of singular matrices, the more the multiplicity of the root can overestimate the corank. In the worst case, when the index curve lies entirely in the locus S, i.e. infinite order of contact, the polynomial Q(p) is identically zero. Note the polynomial Q is a composition Q = DoA of the general determinant polynomial D of n^2 variables with the indexing function A. The rank is determined by the order of vanishing of the partial derivatives of D, which is in general over-approximated by the order of vanishing of the derivatives of the composition Q.

In the example above (post #3) of a 2x2 matrix of corank one which occurs at a double root of Q, the indexed curve of matrices is tangent to the locus of singular matrices at the given matrix, so the rank one matrix is being counted twice in the family. To see this, note that the determinant D = ad-bc of a 2x2 matrix is quadratic, so the locus of singular matrices ad-bc = 0 has degree two and should meet the given line of matrices twice, but the given family A(p) of Jordan form matrices with p's on the diagonal and 1 in upper right corner, only meets the singular matrices at the one point where p = 0, so that is a double intersection. You can also check that the gradient of the determinant D is perpendicular to the velocity vector of this line of matrices at the given point p = 0.

In the space M of nxn matrices, the locus S of matrices of corank ≥ 1, is a hypersurface of degree n defined by the vanishing of the determinant polynomial D. If a matrix A has corank ≥ 2, (i.e. rank ≤ n-2), then not only does the polynomial D vanish at A, but also all its first partials vanish at A. This forces the matrix to occur at a root of multiplicity at least 2 for Q(p) (= D(A(p)), the composition of the polynomial D and the indexing function A(p)). Similarly, at a matrix of corank k, all partials of the determinant polynomial D vanish up to order k-1, forcing the polynomial Q(p) to have a root of multiplicity at least k at A. But the chain rule for derivatives implies that the order of the root of Q(p) is also influenced by the direction of the tangent vector to the curve A(p) in the space of matrices, or whether that tangent vector is zero, so the multiplicity of the root of Q(p) may be higher.

I.e. the indexed curve of matrices is a map A:R-->R^(nxn), the determinant is a polynomial map D:R^(nxn)-->R, and the determinant polynomial Q is their composition Q = (DoA):(R-->R(nxn)-->R). The locus S of matrices of corank ≥ 1 is the zero set of D in R^(nxn). The multiplicity of the root of Q at p is the intersection multiplicity of the image curve A(R) with the hypersurface S at the point A(p). This multiplicity is at least as great as the product of its multiplicity as a point on S, with its multiplicity as a point of the curve A(R).

These multiplicities are computed from the number of vanishing derivatives of the function D and the map A. But the intersection multiplicity can be greater, e.g. if the velocity vector of A lies in the tangent cone of S. A simple root of Q(p) occurs only at a point which is smooth on the hypersurface S (i.e. where gradD ≠ 0) and where the velocity vector of A is both non-zero and transverse to S (i.e. not perpendicular to gradD). [This is meant to expand on what is mentioned in post #2 by @Office_Shredder about the dependence of the matrix on p, i.e. you need to know not just D, but also the indexing map A.]
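The chain rule relation described above can be made concrete: Q'(p) is the dot product of gradD, evaluated along the curve, with the velocity vector A'(p). A minimal sympy sketch, using the fourth 2x2 example from post #3:

```python
import sympy as sp

p = sp.symbols('p')
a, b, c, d = sp.symbols('a b c d')

D = a*d - b*c                          # determinant polynomial on 2x2 matrices
entries = [p, 1, 0, p]                 # the family A(p) = [[p, 1], [0, p]]
subs = dict(zip([a, b, c, d], entries))

gradD    = [sp.diff(D, v).subs(subs) for v in (a, b, c, d)]  # gradD along A(p)
velocity = [sp.diff(e, p) for e in entries]                  # A'(p)
Q        = D.subs(subs)                                      # Q(p) = D(A(p))

chain = sum(g * v for g, v in zip(gradD, velocity))          # gradD . A'(p)
```

At p = 0 the gradient is (0,0,-1,0), nonzero, so the matrix is a smooth point of S (corank 1); but the chain-rule product vanishes because the velocity (1,0,0,1) is perpendicular to the gradient, which is why Q(p) = p^2 has a double root there.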
 
The answer to your other question, as to what can be said when the composition Q = DoA is identically zero, is that then of course Q only gives you the lower bound of zero for rank A(p), which is uninformative. But since the locus of matrices of corank ≥ 2 is defined by the vanishing of D and also of all the first partials of D, you can still obtain information from the derivatives, not of Q, but of D. E.g. if you have an indexed curve A(p) of matrices for which Q(p) = D(A(p)) is identically zero, but not all first partials of D vanish at A(p), then the rank of A(p) is exactly n-1.

In general matrices of corank k are points of "multiplicity k" on the hypersurface S of corank ≥ 1 matrices, so the rank can be computed precisely using partial derivatives of D.

Example, n=3:
Since the determinant is a homogeneous polynomial, we may view the non-zero matrices in the locus S (those of rank 1 or 2) projectively, as a 7 dimensional hypersurface of degree 3 in projective 8 space. (Here we are equating non-zero matrices which are scalar multiples of each other, since the homogeneous determinant and its derivatives vanish at one non-zero matrix A iff they vanish at all its scalar multiples tA.)
The sub locus of (equivalence classes of) 3x3 matrices of rank 1 is the 4 dimensional image of the Segre embedding of P^2 x P^2, the product of projective 2 space with itself (since every 3x3 rank 1 matrix is a product of a 3x1 matrix with a 1x3 matrix). This 4 dim'l locus seems to have degree 6 in P^8, and comprises the locus of "double points" of the hypersurface S.

Thus a general one dimensional (projective) family of (non - zero) 3x3 matrices would have at least 3 matrices of rank ≤ 2, but none of rank 1. Indeed, since the locus of rank 1 matrices has codimension 4, even a 3 dim'l general family of 3x3 matrices would have none of rank 1.
For any indexed curve A(p) of 3x3 matrices, a non-zero matrix which is a simple root of the determinant polynomial Q(p) will have rank 2. If we have a multiple root at p, that means the derivative of Q = D(A(p)) also vanishes there. But to conclude we have a matrix of rank one, you must evaluate not the derivative of the composition D(A(p)), but the composition of A with (all) the partial derivatives of D. I.e. you have to check that the index curve passes through the "singular" points of S, and is not merely tangent to the smooth points of S.

If A(0) is a 3x3 matrix of rank 1, then a general line A(R) of matrices through A(0), will give rise to a cubic determinant polynomial Q(p) = D(A(p)) with a double root at p=0, and a 3rd root elsewhere at some matrix of rank 2. But if the line also lies in the (quadratic) tangent cone to S at A(0), then the root p=0 becomes a triple root of Q. I.e. as the line A(R), passing through A(0), moves to also become tangent to S at A(0), the third intersection point with S, i.e. the 3rd root of Q, moves to coincide also with A(0).

Note that there are apparently two ways to compute the rank of a matrix: one can use either the vanishing of minor determinants, or the vanishing of partial derivatives of the original determinant polynomial D. But in fact these are not different! I.e. by the Laplace expansion formula for the degree n determinant polynomial D, its first partials are actually equal (up to sign) to its (n-1)x(n-1) minor determinants. E.g. if n=3 and we letter the matrix rows from left to right, abc, def, ghi, then D = aei - afh - bdi + bfg + cdh - ceg, and the partial wrt a, for instance, is just ei-fh, one of the 2x2 minors.

So an nxn matrix has rank < n-k iff all (n-k)x(n-k) minors vanish, and therefore iff all partials of D of order k vanish. (Because of "Euler's theorem" the fact that D is a homogeneous polynomial means that the vanishing of all the kth partials implies also the vanishing of D and all its lower order partials.)
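The identity between first partials of D and minors is easy to confirm symbolically; a minimal sympy sketch for n = 3, with the rows lettered abc, def, ghi as above:

```python
import sympy as sp

# A fully symbolic 3x3 matrix, rows lettered abc, def, ghi.
a, b, c, d, e, f, g, h, i = sp.symbols('a b c d e f g h i')
M = sp.Matrix([[a, b, c], [d, e, f], [g, h, i]])
D = sp.expand(M.det())     # aei - afh - bdi + bfg + cdh - ceg

dD_da = sp.diff(D, a)      # should equal the complementary 2x2 minor ei - fh
```

The same check works for any entry: the partial with respect to each entry is the corresponding cofactor, i.e. a signed (n-1)x(n-1) minor.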

To summarize the answer to your question: examining the order of a root of the determinant polynomial Q(p) = D(A(p)), i.e. the number of its derivatives that vanish there, can only give you a lower bound for the rank. The exact rank is determined by the number of vanishing partials of D. Composing D with A to get Q can introduce more vanishing derivatives, due either to vanishing derivatives of A, or to orthogonality relations between the derivatives of D and A. However, if A(p) is a general one dimensional indexed family of matrices, then the special phenomena that raise the order of the root will not occur, and the multiplicity of the root of Q will exactly equal the corank of the matrix, as you have noticed in practice.

As illustration, note the second matrix example in post #3 by @pasmith has a matrix containing p^2, forcing the derivative of A to be zero at p=0.
The fourth example has velocity vector (i.e. the derivative of A) which is non-zero, but orthogonal to the gradient of D at p = 0.
I.e. D = ad-bc, so gradD = (d,-c,-b,a), which equals (0,0,-1,0) at p=0 for the matrix (a,b,c,d) = (p,1,0,p). And this gradient (0,0,-1,0) is orthogonal to the velocity vector of (p,1,0,p) at p=0, i.e. to (1,0,0,1).
[Note these two examples illustrate the only ways your conjecture can fail; i.e. either the velocity vector A' = 0, or the velocity vector of A is tangent to S.]
The fact that the gradient of D itself is not zero in the fourth example tells us the matrix has rank 1, even though the chain rule forces the derivative of the polynomial Q(p) = p^2 to be zero, i.e. to have a double root at p = 0.

And I must say I think you are very observant in noticing the phenomena that led you to pose this question. I have enjoyed thinking about it, and your question has helped me to learn more about it than I knew before.
 
Summary:
If D:R^(nxn)—>R is the degree n determinant polynomial of an nxn matrix with variable coefficients, then a matrix A has corank > 0 iff the determinant D(A) = 0. The corank of A is > k iff the partial derivatives of D of order k all vanish at A.

An indexed one dimensional family of matrices is given by a map A:R—>R^(nxn) from the reals into the space of all nxn matrices.

Thus if we define the determinant polynomial of the family A as Q = DoA, then by the chain rule, the order of vanishing of the derivatives of Q at a point p, i.e. the multiplicity of the root of Q at p, is greater than or equal to the corank of the matrix A(p).

The multiplicity of the root of Q at p is actually equal to the corank of A(p) unless either the derivative A’(p) =0 or the velocity vector A’(p) is tangent to the locus D=0 in R^(nxn).
 
Forgive me for extending this discussion, but I am slow to process this topic, and I am just appreciating the full insight of @pasmith's beautiful examples.

EDIT: The following paragraph is flawed, as noted after it.

E.g. an nxn matrix of corank k, i.e. rank n-k, can be taken to be an nxn matrix A with all zeroes except for a block identity matrix of dimension (n-k)x(n-k) in the upper right corner. Then a line of matrices with parameter p, passing through this matrix at p=0, may be taken to be this matrix plus p times the rxr identity matrix in the lower left corner, with r ≤ n. In order for this line of matrices to intersect the locus det = 0 only a finite number of times, i.e. for the general matrix in this line to be invertible, we need r ≥ k. [Correction: In fact we need r = k or r = n.]

EDIT: After reading @pasmith's remarks below, I realize that although this works when r = k, and when r = n, it fails for every r with k < r < n. I.e. the matrix A(0) with all zeroes except for an identity (n-k)x(n-k) block in the upper right corner does have rank n-k, but the matrix A(p) obtained by adding in p times an rxr identity block in the lower left corner has determinant (up to sign) p^r when r = k or r = n, hence rank n for p ≠ 0, but has determinant identically zero when k < r < n. I.e. in all intermediate cases the line of matrices lies entirely in the locus D=0. So in the case r=k the line A(p) has minimal order of contact with the hypersurface D=0, for r=n it has maximal finite order of contact, and for all intermediate cases it has infinite order of contact.

Thus an nxn matrix with all zeroes except for an identity (n-k)x(n-k) matrix in the upper right corner, plus p times a kxk identity matrix in the lower left corner, is a line of matrices passing through the matrix A at p=0, and having the lowest possible order of contact with the locus det = 0; i.e. the order of contact = (the order of the root p=0 of the determinant polynomial Q(p) = ±p^k); i.e. order of contact = k = the corank of A.

We can also take a line through A with maximal finite order of contact n to the hypersurface det = 0 at A. Namely, if k ≥1, we can take an nxn matrix with (n-k)x(n-k) identity matrix in the upper right corner, and with all p's on the main diagonal. Such a matrix will have determinant p^n, hence order of contact n with the hypersurface det = 0, and rank (n-k) when p=0.

I do not know how to choose, or if it is possible to choose, a line A(p) of matrices through A(0) with determinant = p^r, for any r with k < r < n.

But by choosing higher powers of p as entries on the diagonal, i.e. by making the curve (no longer a line) A(p) of matrices contain a higher order power of p, one can easily make the determinant have as high a multiplicity at p = 0 as desired. E.g. putting an (n-k)x(n-k) identity matrix in the upper right corner, and then putting p^(s+1) in the upper left corner, and all p's on the rest of the main diagonal, gives determinant = p^(n+s), and rank (n-k) when p=0.

EDIT: Also, by choosing the matrix A(0) carefully, and depending on r, it is possible for A(0) to have rank (n-k), and to pass a line A(p) of matrices through A(0) where det(A(p)) = Q(p) = p^r, for any r with k ≤ r ≤ n. This is done below following @pasmith's lead and using Jordan form matrices. The flawed example above tried to use the same A(0) for all such r, and the successful examples below allow changing A(0) to suit the specific r.

EDIT: By changing bases, one can actually use the same fixed choice of A(0) and vary the line A(p) through A(0) to get Q(p) = p^r for any k ≤ r ≤ n.
 
  • #10
mathwonk said:
E.g. an nxn matrix of corank k, i.e. rank n-k, can be taken to be an nxn matrix A with all zeroes except for a block identity matrix of dimension (n-k)x(n-k) in the upper right corner. [...]

I think you want the block either in the upper left corner or the lower right corner; the non-zero entries must be on the main diagonal. But this is not the only way a matrix can have a given corank.

Consider
$$A(p) = f(p)I + g(p)J$$
where the entries of ##J## are zero except on the superdiagonal, where ##J_{i,i+1} = a_i \in \{0,1\}##, ##i = 1, \dots, n - 1##. Then ##\det A(p) = f(p)^n##. However, at a point where ##f(p) = 0##, its rank is
$$\begin{cases} \sum_{i=1}^{n-1} a_i & g(p) \neq 0 \\ 0 & g(p) = 0. \end{cases}$$
One complication in the analysis is that if ##A(p)## is continuous, it does not follow that the Jordan normal form of ##A(p)## is continuous. For example,
$$A(p) = \begin{pmatrix} p & 1 \\ 0 & -p \end{pmatrix},$$
where the upper right entry in the JNF is 0 if ##p \neq 0## and 1 if ##p = 0##.
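A minimal sympy check of this ##f(p)I + g(p)J## family, with n = 4 and a sample superdiagonal pattern (the particular choices of f, g, and the a_i here are just for illustration):

```python
import sympy as sp

p = sp.symbols('p')
n = 4
a_vec = [1, 0, 1]                 # sample superdiagonal pattern a_i in {0, 1}
f, g = p, 1 + p                   # f vanishes at p = 0, and g(0) = 1 != 0

J = sp.zeros(n, n)
for idx, a_i in enumerate(a_vec):
    J[idx, idx + 1] = a_i         # entries only on the superdiagonal

A = f * sp.eye(n) + g * J

det_A    = sp.expand(A.det())     # det A(p) = f(p)^n = p^4
rank_at0 = A.subs(p, 0).rank()    # at f = 0 with g != 0: the sum of the a_i
```

Since A is upper triangular, the determinant is f(p)^n regardless of the a_i, while the rank at the root depends on them, exactly as stated.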
 
  • #11
@pasmith: Thank you for these useful comments. I am not quite sure I understand you, but there is certainly some error in my post. It is still true of course that an nxn matrix with all zeroes except for a block (n-k)x(n-k) identity matrix in the upper right (or any) corner, does have rank (n-k), since it has (n-k) independent rows, or since the block can be moved from upper right to upper left by invertible column operations which do not change the rank, or since it has a minor (n-k)x(n-k) determinant equal to 1. I.e. the statement of mine that you quote is true. Perhaps you meant that it is not useful for my purposes? (as indeed it was not.)

I.e. it is not true, as I claimed, that augmenting this matrix by p times an rxr identity block in the lower left corner yields a matrix with determinant p^r, for every r with k ≤ r ≤ n. In fact this works only for r=k and r=n, the latter of which was the case in your nice example 4, post #3. The other intermediate cases all have determinant 0. Perhaps this is what worried you.

And I agree that Jordan form can be useful, but I was not using it since I did not see how to use it to give a linear family A(p) (no higher powers of p), with rank n-k for p=0, and determinant p^r for k < r < n, ... (but I see it now).

In fact, it seems easy, using Jordan form, as you suggest, at least if I allow A(0) to vary with r. I.e. given r with 1 ≤ k ≤ r ≤ n, just take an nxn matrix with all zeroes except as follows: the first r entries on the main diagonal are all equal to p, with the remaining n-r diagonal entries equal to 1. Then put (r-k) ones on the first superdiagonal, just above some of the p's. This is possible since (r-k) ≤ (r-1), and there are (r-1) p's with space above them.

Then the determinant equals p^r, and when p = 0, the matrix A(0) has rank (r-k) + (n-r) = (n-k).
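This construction is easy to verify for sample values; a sympy sketch with n = 5, k = 2, r = 3 (chosen here just as an instance with k < r < n):

```python
import sympy as sp

p = sp.symbols('p')
n, k, r = 5, 2, 3                 # sample values with 1 <= k <= r <= n

# First r diagonal entries p, remaining n-r diagonal entries 1.
A = sp.diag(*([p] * r + [1] * (n - r)))
for i in range(r - k):            # (r-k) ones on the first superdiagonal,
    A[i, i + 1] = 1               # each just above one of the p's

det_A    = sp.expand(A.det())     # p^r
rank_at0 = A.subs(p, 0).rank()    # (r-k) + (n-r) = n-k
```

The matrix is upper triangular, so the determinant is the product of the diagonal entries, p^r, while the superdiagonal ones survive at p = 0 and keep the rank at n-k.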

Does this seem ok?

Thank you!
 
  • #12
One small gotcha: everything in all those posts, I think, assumes the entries are polynomials in p, but the original post only said the determinant was a polynomial and that the matrix depends on p. For example, if the matrix is just diagonal 2x2 with ##\sqrt{p}## on the diagonal, the determinant is p, but the rank drops from 2 to 0 at 0.

It actually didn't occur to me that the entries were polynomial; I thought we just had some crazy matrices where everything cancelled out nicely in the determinant, so my observation is mostly just stupidity on my part, but maybe it is also interesting.
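A minimal sympy version of this caveat (the nonnegativity assumption on p is mine, so that sqrt(p) is real):

```python
import sympy as sp

p = sp.symbols('p', nonnegative=True)
A = sp.diag(sp.sqrt(p), sp.sqrt(p))   # entries not polynomial (or differentiable) in p

det_A    = sp.simplify(A.det())       # = p: a polynomial with a simple root at 0
rank_at0 = A.subs(p, 0).rank()        # yet the rank drops from 2 all the way to 0
```

So a simple root of the determinant polynomial says nothing here, because the entries themselves are not differentiable at 0.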
 
  • #13
mathwonk said:
In fact, it seems easy, using Jordan form, as you suggest, at least if I allow A(0) to vary with r. [...] Does this seem ok?
You have really tried your best, man, I appreciate it, but I can't seem to understand more than 60% of what you said. It seems the multiplicity of the root of our determinant does provide us with information about the rank, but only if the matrix is a one-dimensional indexed family of matrices; I've gotten a glimpse of what that means. But can you write a text that takes a general approach, so we can work from it more easily together, and so I can ask people I know in person?
 
  • #14
I assumed you meant one-variable dependence on p, since otherwise I don't know what you mean by the multiplicity of the root at p = 0. I am assuming here that by "multiplicity of the root of Q(p) at p = 0", you mean the highest power p^r that divides Q(p), i.e. also equal to one more than the number of derivatives of Q(p), wrt p, that vanish at the root. (Since you speak about using real analysis, and multiplicity of roots, i.e. taking derivatives, I also do not expect indexing by functions like sqrt(p) that are not differentiable.)

But the most accurate information occurs when you have as variables all the entries aij of the matrix. I.e. det is a polynomial det(aij) in all n^2 variables aij, and the corank of a matrix is ≥ 1 exactly when that polynomial vanishes there. The corank is ≥ 2 exactly when the first partials of that polynomial also vanish, etc. I.e. the corank is exactly the order of vanishing of det(aij) at the matrix, with respect to the variables aij.

We introduce more confusion, i.e. less precision, when we write the entries aij as functions of some other variable, like p. Then the derivatives of the functions aij(p) can increase the order of vanishing of the derivatives of the composed polynomial Q(p) = det(aij(p)). So the order of vanishing of det(aij) gives exactly the corank, but the composition Q(p) = det(aij(p)) can vanish to higher order than the corank.
Nonetheless, "statistically general" (differentiable) indexing functions aij(p) will not increase the order of vanishing of Q(p). Thus for "generally chosen" examples, your "glimpse of an idea", that "the degree of the root of Q(p) tells us how much the rank drops (for r degree the rank drops to n-r)", will be true.

That's all.

But since I don't know quite what you mean, I think it is time for me to hush and for you to show more examples of the phenomena you have found, i.e. more of your work.
 
  • #15
Ok, one more try. I will try to make this as concrete as possible. As I understand it, you are studying the determinant polynomial Q(p) of a family A(p) of nxn matrices indexed by p, with A(0) having rank n-k at p = 0. We want to understand the relation between rank(A(0)) and the order of the root of Q(p) at p = 0. As stated, this contains information only when Q(p) is not identically zero, so we consider that case here.

After change of bases I claim the matrix A(0) may be assumed to be diagonal with the first n-k entries equal to 1, and the remaining k entries all zeroes. So we want to modify this matrix by adding on a matrix B(p) indexed by p, and such that B(0) is the zero matrix. Since we want the determinant of A(p) = A(0) + B(p) to be a polynomial Q(p), it is natural to require the entries of B(p) to be polynomials.

If we want the polynomial Q(p) = det(A(p)) not to be identically zero, the simplest choice is to take B(p) diagonal, with its first n-k diagonal entries equal to zero and the remaining k diagonal entries each equal to p. Then A(p) is also diagonal, with its first n-k diagonal entries equal to 1 and the remaining k diagonal entries equal to p. Then Q(p) = p^k has a root at p = 0 of order k.

We can also make Q(p) have a root at p = 0 of order higher than k, indeed as high as we wish, by putting higher powers of p into some of the last k entries on the diagonal.

My claim is that you cannot choose B(p) to make Q(p) have a root at p = 0, of order lower than k, as long as you take B(p) to be any nxn matrix with polynomial entries in p, such that B(0) = 0. Try it and see.
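Here is one way to "try it and see" with sympy: sample random polynomial families B(p) with B(0) = 0 and check that the root of Q at p = 0 never has order lower than k. (Sampling of course illustrates the bound rather than proving it; the sample size, degree range, and seed are arbitrary choices of mine.)

```python
import random
import sympy as sp

random.seed(0)
p = sp.symbols('p')
n, k = 4, 2

# After a change of basis: rank n-k, first n-k diagonal entries 1, rest 0.
A0 = sp.diag(*([1] * (n - k) + [0] * k))

def root0_multiplicity(Q):
    """Multiplicity of the root p = 0 of a nonzero polynomial Q."""
    return min(m[0] for m in sp.Poly(sp.expand(Q), p).monoms())

mults = []
# A deterministic family first: B = p*I gives Q = (1+p)^(n-k) * p^k, order k.
mults.append(root0_multiplicity((A0 + p * sp.eye(n)).det()))

# Random polynomial families B(p) with B(0) = 0.
for _ in range(20):
    B = sp.Matrix(n, n, lambda i, j: random.randint(-2, 2) * p
                                     + random.randint(-2, 2) * p**2)
    Q = sp.expand((A0 + B).det())
    if Q != 0:                    # Q only bounds the corank when it is nonzero
        mults.append(root0_multiplicity(Q))
```

No sampled family gives a root of order below k = 2, and the diagonal family B = p*I attains the order-k minimum exactly.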

Thus the information obtained from analyzing the roots of Q(p) = det(A(0)+B(p)), is that corank(A(0)) = minimal possible order of the root at p = 0 of Q(p), with the minimum taken over all polynomially indexed families B(p) with B(0) = 0, (or even just all families B(p) with homogeneous linear entries).
 