Diagonalization of symmetric bilinear functions


Discussion Overview

The discussion revolves around the diagonalization of symmetric bilinear functions, specifically examining the implications of the duality principle and the conditions under which a bilinear form can be represented as a diagonalizable matrix. Participants explore examples and clarify terminology related to diagonalization and orthonormalization.

Discussion Character

  • Technical explanation
  • Conceptual clarification
  • Debate/contested

Main Points Raised

  • Some participants note that a bilinear function \(\theta:V\times V \rightarrow R\) can be represented by a matrix \(T\) with entries \(T(i,j)=\theta(\alpha_i,\alpha_j)\), and that this matrix can be diagonalized.
  • It is proposed that the values of \(\theta(\alpha_i,\alpha_i)\) can be 0, 1, or -1, with a specific example showing that \(\theta(\alpha_i,\alpha_i)=-1\) arises in certain contexts.
  • One participant introduces the idea that the real numbers are not algebraically closed, suggesting that diagonalization may yield different results in the complex numbers.
  • Another participant presents examples of bilinear forms, illustrating how orthogonal bases can be derived and how normalization affects the diagonalization process.
  • A question is raised regarding the distinction between "diagonalizable" and "orthonormalizable," with a focus on the mathematical operations involved in each process.

Areas of Agreement / Disagreement

Participants express differing views on the relationship between diagonalization and orthonormalization, indicating a lack of consensus on terminology and the implications of the examples provided.

Contextual Notes

There are unresolved questions regarding the definitions and processes of diagonalization versus orthonormalization, as well as the implications of using real versus complex numbers in these contexts.

yifli
According to duality principle, a bilinear function [itex]\theta:V\times V \rightarrow R[/itex] is equivalent to a linear mapping from V to its dual space V*, which can in turn be represented as a matrix T such that [itex]T(i,j)=\theta(\alpha_i,\alpha_j)[/itex]. And this matrix T is diagonalizable, i.e., [itex]\theta(\alpha_i,\alpha_i)=0,1,-1[/itex].

I don't understand how come [itex]\theta(\alpha_i,\alpha_i)=-1[/itex]
 
Hi yifli! :smile:

yifli said:
According to duality principle, a bilinear function [itex]\theta:V\times V \rightarrow R[/itex] is equivalent to a linear mapping from V to its dual space V*, which can in turn be represented as a matrix T such that [itex]T(i,j)=\theta(\alpha_i,\alpha_j)[/itex]. And this matrix T is diagonalizable, i.e., [itex]\theta(\alpha_i,\alpha_i)=0,1,-1[/itex].

I don't understand how come [itex]\theta(\alpha_i,\alpha_i)=-1[/itex]

That -1 arises because the real numbers are not algebraically closed. Over the complex numbers, T can be diagonalized with only 0's and 1's on the diagonal.

Basically, saying that theta is diagonalizable is equivalent to picking an orthogonal basis for theta. Let's pick an example:

[tex]\theta(x,y)=x_1y_1+x_2y_2[/tex]

This is a bilinear form, and the following basis is orthogonal with respect to it: v=(5,0), w=(0,10). However, we can normalize each basis vector by computing:

[tex]\frac{v}{\sqrt{\theta(v,v)}}[/tex]

This yields the basis (1,0), (0,1). So theta is represented by a diagonal matrix with 1's on the diagonal.
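The normalization step can be checked numerically. Here is a minimal sketch (using NumPy, which the thread itself doesn't use; the variable names are mine), taking theta to be the standard dot product on R² as in the example:

```python
import numpy as np

# The bilinear form from the example: theta(x, y) = x1*y1 + x2*y2
def theta(x, y):
    return x @ y

# An orthogonal (but not normalized) basis
v = np.array([5.0, 0.0])
w = np.array([0.0, 10.0])

# Gram matrix of the raw basis: diagonal, but with 25 and 100 on it
gram_raw = np.array([[theta(v, v), theta(v, w)],
                     [theta(w, v), theta(w, w)]])

# Divide each vector by sqrt(theta(v, v))
v_n = v / np.sqrt(theta(v, v))
w_n = w / np.sqrt(theta(w, w))

# Gram matrix of the normalized basis: the identity, i.e. 1's on the diagonal
gram_norm = np.array([[theta(v_n, v_n), theta(v_n, w_n)],
                      [theta(w_n, v_n), theta(w_n, w_n)]])
print(gram_norm)
```

The raw basis is already orthogonal (the Gram matrix is diagonal); normalizing only rescales the diagonal entries to 1.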

However, let's pick

[tex]\theta(x,y)=-(x_1y_1+x_2y_2)[/tex]

This is also a bilinear form, and an orthogonal basis for it is again v=(1,0), w=(0,1). However, for this form we have

[tex]\theta(v,v)=-1[/tex]

So if we try to normalize this, we get

[tex]\frac{v}{\sqrt{\theta(v,v)}}=\frac{v}{\sqrt{-1}}[/tex]

but this does not exist in the real numbers. It is possible over the complex numbers, however, and it yields the basis (-i,0), (0,-i). With respect to theta, these vectors have "norm" 1, so we again get the matrix with 1's on the diagonal.
 
Uhm, are you guys confusing the terms "diagonalizable" and "orthonormalizable", or is it me who's confused? When you diagonalize a matrix, don't you multiply on the left and right by a matrix and its inverse? In orthonormalization, on the other hand, don't you multiply on the left and right by a matrix and its transpose? (That's exactly what happens in Micromass's last example.)
 
Petr Mugver said:
Uhm, are you guys confusing the terms "diagonalizable" and "orthonormalizable", or is it me who's confused? When you diagonalize a matrix, don't you multiply on the left and right by a matrix and its inverse? In orthonormalization, on the other hand, don't you multiply on the left and right by a matrix and its transpose? (That's exactly what happens in Micromass's last example.)

Indeed, I've edited my post. Sorry yifli!
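Petr Mugver's distinction can be checked numerically: a bilinear form's matrix transforms by congruence (PᵀAP), while a linear operator's matrix transforms by similarity (P⁻¹AP), and the two give different results. A minimal sketch (NumPy, my naming) using the matrices from the second example above:

```python
import numpy as np

# Gram matrix of theta(x, y) = -(x1*y1 + x2*y2) in the standard basis
A = np.array([[-1.0, 0.0],
              [0.0, -1.0]], dtype=complex)

# Change-of-basis matrix whose columns are the complex-normalized vectors
P = np.array([[-1j, 0.0],
              [0.0, -1j]])

# Congruence (bilinear form): transpose, not inverse
congruent = P.T @ A @ P     # the identity matrix: 1's on the diagonal

# Similarity (linear operator): the eigenvalues -1 are preserved
similar = np.linalg.inv(P) @ A @ P
print(congruent)
print(similar)
```

Congruence can change the diagonal entries (here -1 becomes +1, only the signature is invariant over R by Sylvester's law of inertia), whereas similarity always preserves the eigenvalues.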
 
