The answer to your other question, as to what can be said when the composition Q = D∘A is identically zero, is that then of course Q only gives you the lower bound of zero for rank A(p), which is uninformative. But since the locus of matrices of corank ≥ 2 is defined by the vanishing of D and also of all the first partials of D, you can still obtain information from the derivatives, not of Q, but of D. E.g. if you have an indexed curve A(p) of matrices for which Q(p) = D(A(p)) is identically zero, but not all first partials of D vanish at A(p), then the rank of A(p) is exactly n-1.
In general matrices of corank k are points of "multiplicity k" on the hypersurface S of corank ≥ 1 matrices, so the rank can be computed precisely using partial derivatives of D.
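To make that concrete, here is a minimal sympy sketch with a curve of my own choosing (not one from the thread): Q(p) = D(A(p)) is identically zero along it, so Q tells you nothing, but the first partials of D, i.e. the 2x2 minors, do not all vanish, and the rank is exactly n-1 = 2 for p ≠ 0.

```python
import sympy as sp

# My own illustrative curve (not from the thread): the last row is identically
# zero, so Q(p) = D(A(p)) vanishes for every p, yet A(p) has rank 2 for p != 0.
p = sp.symbols('p')
A = sp.Matrix([[p, 0, 0],
               [0, p, 0],
               [0, 0, 0]])

print(A.det())              # 0 identically: Q gives no information at all

# The first partials of D at A(p) are (up to sign) the 2x2 minors of A(p),
# i.e. the entries of the adjugate matrix; they do not all vanish for p != 0.
print(A.adjugate())         # has p**2 in the (3,3) entry
print(A.subs(p, 1).rank())  # 2, i.e. corank exactly 1 (rank n-1), as claimed
```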
Example, n=3:
Since the determinant is a homogeneous polynomial, we may view the non-zero matrices in the locus S (those of rank 1 or 2) projectively, as a 7 dimensional hypersurface of degree 3 in projective 8 space. (Here we are equating non-zero matrices which are scalar multiples of each other, since the homogeneous determinant and its derivatives vanish at a non-zero matrix A iff they vanish at all its scalar multiples tA.)
The sublocus of (equivalence classes of) 3x3 matrices of rank 1 is the 4 dimensional image of the Segre embedding of P^2 x P^2, the product of projective 2 space with itself (since every 3x3 rank 1 matrix is a product of a 3x1 matrix with a 1x3 matrix). This 4 dim'l locus seems to have degree 6 in P^8, and comprises the locus of "double points" of the hypersurface S.
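If you want to check the "double point" statement symbolically, here is a small sympy sketch (with my own symbol names) verifying that a generic rank 1 matrix, written as a column times a row as in the Segre picture, kills all nine first partials of D:

```python
import sympy as sp

# Generic rank-1 matrix u * v^T, as in the Segre picture (symbols are mine).
u1, u2, u3, v1, v2, v3 = sp.symbols('u1 u2 u3 v1 v2 v3')
rank1 = sp.Matrix([u1, u2, u3]) * sp.Matrix([[v1, v2, v3]])

# D as a polynomial in nine indeterminates, and its nine first partials.
X = sp.Matrix(3, 3, list(sp.symbols('x0:9')))
D = X.det()
subs = {X[k]: rank1[k] for k in range(9)}

partials = [sp.expand(sp.diff(D, X[k]).subs(subs)) for k in range(9)]
print(all(g == 0 for g in partials))   # True: every first partial of D vanishes,
                                       # so rank-1 matrices are double points of S
```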
Thus a general one dimensional (projective) family of (non-zero) 3x3 matrices would have at least 3 matrices of rank ≤ 2, but none of rank 1. Indeed, since the locus of rank 1 matrices has codimension 4, even a general 3 dim'l family of 3x3 matrices would have none of rank 1.
For any indexed curve A(p) of 3x3 matrices, if p is a simple root of the determinant polynomial Q, then the (non-zero) matrix A(p) will have rank 2. If there is a multiple root at p, that means the derivative of Q = D(A(p)) also vanishes there. But to conclude that the matrix has rank one, you must evaluate not the derivative of the composition D(A(p)), but the composition of A with (all) the partial derivatives of D. I.e. you have to check that the indexed curve passes through the "singular" points of S, and is not merely tangent to S at a smooth point.
If A(0) is a 3x3 matrix of rank 1, then a general line A(R) of matrices through A(0) will give rise to a cubic determinant polynomial Q(p) = D(A(p)) with a double root at p=0, and a 3rd root elsewhere at some matrix of rank 2. But if the line also lies in the (quadratic) tangent cone to S at A(0), then the root p=0 becomes a triple root of Q. I.e. as the line A(R) through A(0) moves so as to become tangent to S at A(0), the third intersection point with S, i.e. the 3rd root of Q, moves to coincide with A(0) as well.
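Here is that picture worked out with concrete (and deliberately simple) choices of my own: A(0) = E11 has rank 1, hence is a double point of S; a generic direction gives a double root at p=0 plus a third root elsewhere, while a direction lying in the tangent cone to S at A(0) gives a triple root.

```python
import sympy as sp

p = sp.symbols('p')
A0 = sp.Matrix([[1, 0, 0],
                [0, 0, 0],
                [0, 0, 0]])                      # rank 1, so a double point of S

B_generic = sp.eye(3)                            # a direction not tangent to S at A0
B_tangent = sp.Matrix([[0, 1, 0],
                       [1, 0, 0],
                       [0, 0, 1]])               # direction inside the tangent cone

print(sp.factor((A0 + p*B_generic).det()))       # p**2*(p + 1): double root at 0,
                                                 # third root at p = -1 (a rank 2 matrix)
print(sp.factor((A0 + p*B_tangent).det()))       # -p**3: the third root has moved
                                                 # into p = 0, giving a triple root
```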
Note that there are apparently two ways to compute the rank of a matrix; i.e. one can use either the vanishing of minor determinants, or the vanishing of partial derivatives of the original determinant polynomial D. But in fact these are not different! I.e. by the Laplace expansion formula for the degree n determinant polynomial D, its first partials are exactly its (n-1)x(n-1) minor determinants, up to sign (i.e. its cofactors). E.g. if n=3 and we letter the matrix rows from left to right, abc, def, ghi, then D = aei - afh - bdi + bfg + cdh - ceg, and the partial wrt a for instance is just ei - fh, one of the 2x2 minors.
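A one-line sympy check of this identity for n=3 (the symbols are just the letters abc, def, ghi above):

```python
import sympy as sp

a, b, c, d, e, f, g, h, i = sp.symbols('a b c d e f g h i')
M = sp.Matrix([[a, b, c],
               [d, e, f],
               [g, h, i]])
D = M.det()                                  # a*e*i - a*f*h - b*d*i + b*f*g + c*d*h - c*e*g

print(sp.expand(sp.diff(D, a)))              # e*i - f*h, the 2x2 minor of the (1,1) entry
print(all(sp.expand(sp.diff(D, M[r, s]) - M.cofactor(r, s)) == 0
          for r in range(3) for s in range(3)))   # True: each partial is the cofactor
```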
So an nxn matrix has rank < n-k iff all (n-k)x(n-k) minors vanish, and therefore iff all partials of D of order k vanish. (Because of "Euler's theorem", the fact that D is a homogeneous polynomial means that the vanishing of all the kth order partials at a given matrix implies also the vanishing of D and all its lower order partials there.)
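And here is the corresponding rank test, sketched in sympy on a rank 1 matrix of my own choosing: all first order partials of D vanish at it (so rank < 2), but not all second order partials do (so rank ≥ 1), pinning the rank at exactly 1.

```python
import sympy as sp

X = sp.Matrix(3, 3, list(sp.symbols('x0:9')))
D = X.det()

A = sp.Matrix([[1, 2, 3],
               [2, 4, 6],
               [3, 6, 9]])                       # rank 1: every row is a multiple of (1,2,3)
at_A = {X[k]: A[k] for k in range(9)}

first  = [sp.diff(D, X[k]).subs(at_A) for k in range(9)]
second = [sp.diff(D, X[j], X[k]).subs(at_A) for j in range(9) for k in range(9)]

print(all(v == 0 for v in first))    # True : all 1st order partials vanish, so rank < 2
print(all(v == 0 for v in second))   # False: some 2nd order partial is nonzero, so rank >= 1
```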
To summarize the answer to your question: examining the order of a root of the determinant polynomial Q(p) = D(A(p)), i.e. the number of its derivatives that vanish, can only give you a lower bound for the rank. The exact rank is determined by the number of vanishing partials of D. Composing D with A to get Q can introduce more vanishing derivatives, due either to vanishing derivatives of A, or to orthogonality relations between the derivatives of D and A. However, if A(p) is a general one dimensional indexed family of matrices, then the special phenomena that raise the order of the root will not occur, and the multiplicity of the root of Q will exactly equal the corank of the matrix, as you have noticed in practice.
As illustration, note that the second matrix example in post #3 by @pasmith involves a matrix containing p^2, forcing the derivative of A to be zero at p=0.
The fourth example has a velocity vector (i.e. the derivative of A) which is non-zero, but orthogonal to the gradient of D at p = 0. I.e. D = ad-bc, so gradD = (d,-c,-b,a), which equals (0,0,-1,0) at p=0 for the matrix (a,b,c,d) = (p,1,0,p). And this gradient (0,0,-1,0) is orthogonal to the velocity vector at p=0 of (p,1,0,p), i.e. to (1,0,0,1).
[Note these two examples illustrate the
only ways your conjecture can fail; i.e. either the velocity vector A' = 0, or the velocity vector of A is tangent to S.]
The fact that the gradient of D itself is not zero in the fourth example tells us the matrix has rank 1, even though the chain rule forces the derivative of the polynomial Q(p) = p^2 to vanish at p = 0, i.e. forces Q to have a double root there.
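For completeness, a small sympy check of this fourth example: Q(p) = p^2 indeed has a double root at 0, yet the gradient of D is nonzero there (so A(0) has rank 1), and the vanishing of Q'(0) comes purely from the orthogonality of that gradient to the velocity vector A'(0).

```python
import sympy as sp

p, a, b, c, d = sp.symbols('p a b c d')
D = a*d - b*c
curve = {a: p, b: sp.Integer(1), c: sp.Integer(0), d: p}   # A(p) = [[p, 1], [0, p]]

Q = sp.expand(D.subs(curve))
print(Q)                                           # p**2: double root of Q at p = 0

grad_D   = sp.Matrix([D.diff(s) for s in (a, b, c, d)]).subs(curve).subs(p, 0)
velocity = sp.Matrix([curve[s].diff(p) for s in (a, b, c, d)])

print(grad_D.T)                                    # Matrix([[0, 0, -1, 0]]): nonzero, so rank 1
print(velocity.T)                                  # Matrix([[1, 0, 0, 1]]): A'(0)
print(grad_D.dot(velocity))                        # 0: the chain rule then forces Q'(0) = 0
```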
And I must say I think you are very observant in noticing the phenomena that led you to pose this question. I have enjoyed thinking about it, and your question has helped me to learn more about it than I knew before.