In general, what exactly does it mean for a tensor to be non-degenerate? Does it mean that the vector space underlying it all has a zero kernel? I'm still a bit hazy on the degeneracy of bilinear forms in general. They're not exactly like tensors, either, but I'm guessing there's some kind of overlap?
Well, think about it. You said: "does it mean that the vector space underlying it all has a zero kernel?" Can you see anything strange in that question? In particular, does it even make sense to talk about the kernel of a vector space, or do you need to add extra structure in order to define a kernel? By the way, your question about non-degeneracy relates, very loosely speaking, to the invertibility of certain tensors. I'd explain more but, well, it's five A.M.
Oh right, right, I probably should have had some kind of linear transformation in mind to have a kernel. I s'pose it doesn't make much sense without that. :) In any case, I think it is starting to make sense... If we just look at the simplest case (a rank 2 tensor, which is like a bilinear form), then non-degeneracy means that the matrix of the bilinear form has nonzero determinant, which means the associated linear map is invertible. (If it is a positive definite form, I think non-degeneracy is implied.) I guess we'd care about it being invertible if we're looking at something like the metric, which we might use to raise and lower indices, but this requires an inverse metric. Is this more along the right lines of thought?
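A quick numerical sketch of that last point (my own illustration using numpy, with a hypothetical 2D "metric" chosen just for the example): a nonzero determinant is exactly what lets us form the inverse metric and raise the index back.

```python
import numpy as np

# A non-degenerate "metric" on R^2 (Minkowski-style; det = -1, not 0)
g = np.array([[1.0,  0.0],
              [0.0, -1.0]])

assert np.linalg.det(g) != 0   # non-degenerate, so an inverse metric exists
g_inv = np.linalg.inv(g)

v = np.array([2.0, 3.0])       # a vector (upper index)
v_lower = g @ v                # lower the index with the metric
v_back = g_inv @ v_lower       # raise it again with the inverse metric

print(np.allclose(v_back, v))  # True: lowering then raising recovers v
```

If the determinant were zero, `np.linalg.inv(g)` would raise an error, and the lowering map would lose information that no "raising" could recover.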
Yes, this does seem to be along the right lines of thought. A tensor can be thought of as a map or transformation. For example, the matrix of a bilinear form maps vectors to vectors. A noninvertible bilinear form is degenerate because this map is not one-to-one: it maps distinct values (for instance, vectors) onto the same value (or vector).

So why are degenerate tensors sometimes a problem? One reason is that tensors are often used to define and then manipulate equations, and when a degenerate tensor is used for multiplication in the algebraic manipulations, one tends to introduce spurious solutions. An analogous problem occurs in simple algebraic problems. For example, take the function [tex]g(x)=x/5-1[/tex] and the equation [tex]g(x)=0[/tex]. We can multiply both sides of the equation [tex]g(x)=0[/tex] by some function [tex]f(x)[/tex]. We will call a function nondegenerate if it is nonzero at all values of x; for example, the function [tex]f(x)=5[/tex]. This gives [tex]f(x) g(x)=x-5=0[/tex]. However, take the function [tex]f(x)=5(x-3)[/tex]. This gives [tex]f(x) g(x)=(x-3)(x-5)=0[/tex]. Of course [tex]x=3[/tex] is not a solution of [tex]g(x)=0[/tex]. The problem is that [tex]f(x)=0[/tex] at [tex]x=3[/tex]: no matter what value is on either side of the equation, it is mapped to 0. The function [tex]f(x)=5(x-3)[/tex] is, of course, not invertible at x=3 (i.e. [tex]1/(5(x-3))[/tex] becomes singular at [tex]x=3[/tex]).

This is a rather trivial example, but it illustrates why a noninvertible transformation is a problem. Although the nontrivial nullspace of a bilinear form may be more complicated, one runs into similar problems when it is used in algebraic manipulations: spurious results can be introduced because distinct values (say, vectors) can be projected onto the same value (vector) when a degenerate operator is used.
There are other reasons why degeneracy is important to identify, but I hope this example proves useful in understanding why degeneracy can be an issue.
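To make the analogy concrete, here is a small numpy sketch (my own illustration, not from the posts above) of a degenerate matrix sending distinct vectors to the same image, which is exactly why cancelling it from an equation introduces spurious solutions:

```python
import numpy as np

# A degenerate (singular) matrix: the second row is twice the first,
# so the determinant vanishes and the matrix is not invertible.
A = np.array([[1.0, 2.0],
              [2.0, 4.0]])
print(np.linalg.det(A))   # ~0: A is not invertible

u = np.array([2.0, 0.0])
v = np.array([0.0, 1.0])
# Distinct vectors are mapped to the same image...
print(A @ u, A @ v)       # both give [2. 4.]

# ...so from A @ x = A @ y we cannot conclude x = y: "dividing by A"
# is not allowed, just as dividing by f(x) = 5(x-3) fails at x = 3.
```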
I had an additional question here, hope someone can suggest something. This is a HW problem I had many years back that I was never able to answer: I am trying to show that a bilinear map B(x,y) on a finite-dimensional inner-product space (V, <,>) is non-degenerate iff its representing matrix is skew-symmetric. (For the matrix M, choose a basis {V1,...,Vn} for V, and then m_ij = B(Vi,Vj), i.e., the ij-th entry is the form evaluated at Vi and Vj.)

The only thing I could think of is using the result that, in a finite-dimensional inner-product space, every functional can be written as (let's just assume a functional on VxV to keep it simple) B((x1,x2)) = <(x1,x2),(y1,y2)> for a fixed (y1,y2) in VxV. Then I thought of using induction and cofactor expansion of the determinant set to zero, i.e., try to show what happens for n=2, etc. For n=2, the determinant is B(V1,V1)B(V2,V2) - B(V1,V2)B(V2,V1). Clearly, if B(Vi,Vj) = -B(Vj,Vi), i.e., with skew-symmetry, the determinant is nonzero for B ≠ 0, but I don't see how this is necessary. Any ideas? Thanks for any help.
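Not an answer, but a quick numerical sanity check of the n=2 computation above (my own sketch). Skew-symmetry forces the diagonal entries B(Vi,Vi) to vanish, so the determinant reduces to B(V1,V2)^2, which is nonzero whenever B is nonzero:

```python
import numpy as np

b = 3.0  # any nonzero value of B(V1, V2)

# Skew-symmetric 2x2 Gram matrix: m_ij = B(Vi, Vj) with B(Vi,Vj) = -B(Vj,Vi),
# which forces the diagonal entries B(Vi, Vi) = 0.
M = np.array([[0.0,  b],
              [-b, 0.0]])

# det = B(V1,V1)B(V2,V2) - B(V1,V2)B(V2,V1) = 0 - (b)(-b) = b^2
print(np.linalg.det(M))  # b**2 (= 9.0 here)
```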
A Tiny Introduction to Cayley-Dickson Algebras

A few concrete examples should help.

Positive definite bilinear form (euclidean plane [itex]E^2[/itex]; cos law from elliptic trig):
[tex] \left[ \begin{array}{cc} t_1 & x_1 \end{array} \right] \; \left[ \begin{array}{cc} 1 & 0 \\ 0 & 1 \end{array} \right] \; \left[ \begin{array}{c} t_2 \\ x_2 \end{array} \right] = t_1 \, t_2 + x_1 \, x_2 [/tex]

Degenerate bilinear form (galilean plane [itex]E^{1,0}[/itex]; cosg law from parabolic trig):
[tex] \left[ \begin{array}{cc} t_1 & x_1 \end{array} \right] \; \left[ \begin{array}{cc} 1 & 0 \\ 0 & 0 \end{array} \right] \; \left[ \begin{array}{c} t_2 \\ x_2 \end{array} \right] = t_1 \, t_2 [/tex]

Indefinite nondegenerate bilinear form (minkowskian plane [itex]E^{1,1}[/itex]; cosh law from hyperbolic trig):
[tex] \left[ \begin{array}{cc} t_1 & x_1 \end{array} \right] \; \left[ \begin{array}{cc} 1 & 0 \\ 0 & -1 \end{array} \right] \; \left[ \begin{array}{c} t_2 \\ x_2 \end{array} \right] = t_1 \, t_2 - x_1 \, x_2 [/tex]

Typical elements of the groups of linear transformations (matrices act from the left on column vectors) which preserve these respective forms:

rotations [itex]SO(2)[/itex]:
[tex] \frac{1}{\sqrt{1+v^2}} \; \left[ \begin{array}{cc} 1 & v \\ -v & 1 \end{array} \right], \; -\infty < v < \infty [/tex]

shears [itex]SO(1,0)[/itex]:
[tex] \left[ \begin{array}{cc} 1 & 0 \\ v & 1 \end{array} \right], \; -\infty < v < \infty [/tex]

boosts [itex]SO(1,1)[/itex]:
[tex] \frac{1}{\sqrt{1-v^2}} \; \left[ \begin{array}{cc} 1 & v \\ v & 1 \end{array} \right], \; -1 < v < 1 [/tex]

Notable geometric features: in the euclidean plane interpreted as "spacetime", curves with initially "future-pointing" tangents can "turn around in time" so that their tangents become "past-pointing"; in the galilean plane, horizontal lines are null, and there is a "universal time" which works the same way for all inertial observers independent of their state of motion; in the minkowskian plane, lines with slope [itex]\pm 1[/itex] are null.

Exercise: formulate these in a uniform way using a generalization of complex numbers [itex]t + x \, \epsilon[/itex] in which [itex]\epsilon^2 = -1,0,1[/itex] respectively. (A real linear algebra with a multiplicative norm induced by a given bilinear form is a Cayley-Dickson algebra, and these are very rare.) Can you figure out notions analogous to "high school trig" in the second two cases? How about "holomorphic differentiation"? The Cauchy-Riemann equations? Orthogonal matrices? How about analogues of Cauchy's integral theorem? How are zero divisors in the respective algebras related to the null lines? Can you find an appropriate notion of path curvature in the second two cases? What are the curves of constant path curvature? If you know about symmetry groups of systems of differential equations, what are the point symmetry groups of the equations of constant path curvature? (Hint: what is their dimension, as real Lie groups?) What are the fundamental invariants of these groups?

HTH
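The exercise's generalized numbers are easy to experiment with numerically. Here is a minimal sketch (my own illustration; the class name `GenComplex` is made up) in which [itex]\epsilon^2 = -1, 0, 1[/itex] gives the ordinary complex, dual, and split-complex numbers, whose norm N(t + xε) = t² - ε²x² reproduces the euclidean, galilean, and minkowskian forms above and is multiplicative in all three cases:

```python
# Generalized complex numbers t + x*eps with eps^2 = eps2 in {-1, 0, 1}:
# ordinary complex (eps2 = -1), dual (eps2 = 0), split-complex (eps2 = 1).

class GenComplex:
    def __init__(self, t, x, eps2):
        self.t, self.x, self.eps2 = t, x, eps2

    def __mul__(self, other):
        assert self.eps2 == other.eps2
        # (t1 + x1 e)(t2 + x2 e) = (t1 t2 + eps2 x1 x2) + (t1 x2 + x1 t2) e
        return GenComplex(self.t * other.t + self.eps2 * self.x * other.x,
                          self.t * other.x + self.x * other.t,
                          self.eps2)

    def norm(self):
        # N(t + x e) = t^2 - eps2 * x^2: the euclidean, galilean, or
        # minkowskian quadratic form for eps2 = -1, 0, 1 respectively.
        return self.t ** 2 - self.eps2 * self.x ** 2


# The norm is multiplicative in all three cases: N(z w) = N(z) N(w).
for eps2 in (-1, 0, 1):
    z = GenComplex(2.0, 3.0, eps2)
    w = GenComplex(-1.0, 4.0, eps2)
    assert abs((z * w).norm() - z.norm() * w.norm()) < 1e-9
```

Note how the norm-zero elements (the zero divisors, when eps2 = 0 or 1) line up with the null directions of the corresponding plane, which is one of the questions the exercise asks about.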