How Does the Graph Laplacian Explain the Multiplicity of Eigenvalue Zero?

  • Thread starter: GabrielN00
  • Tags: Proof, Work
SUMMARY

The discussion focuses on proving that the multiplicity of the eigenvalue zero of the Laplacian matrix ##L_s## of a non-directed graph with non-negative weights corresponds to the number of convex components in the graph. The proof utilizes the identity ##\langle g, L_s g \rangle = \langle f, L f \rangle = \sum_{uv \in E} (f(u) - f(v))^2## to establish non-negativity of eigenvalues. Key points include the need for clarity on the definitions of the adjacency matrix ##A## and the diagonal matrix ##D##, as well as the relationship between eigenvalues and graph connectivity.

PREREQUISITES
  • Understanding of graph theory concepts, particularly Laplacian matrices.
  • Familiarity with eigenvalues and eigenvectors in linear algebra.
  • Knowledge of adjacency matrices and their properties.
  • Basic combinatorial matrix theory, including positive semi-definiteness.
NEXT STEPS
  • Study the properties of the graph Laplacian, specifically the combinatorial form ##L = D - A##.
  • Learn about eigenvalue interlacing and its implications for graph connectivity.
  • Explore the concept of convex components in graph theory and their relation to recurrent classes.
  • Review the Gerschgorin circle theorem and its application to positive semi-definite matrices.
USEFUL FOR

Mathematicians, computer scientists, and researchers in graph theory, particularly those interested in spectral graph theory and the properties of Laplacian matrices.

GabrielN00

Homework Statement


Let ##G## be a non-directed graph with non-negative weights. Prove that the multiplicity of the eigenvalue ##0## of ##L_s## equals the number of convex components ##A_1,\dots, A_k## of the graph, and that the eigenspace associated with the eigenvalue ##0## is spanned by the vectors ##D^{1/2}1_{A_i}##.

Note: ##L_s## denotes the Laplacian matrix.

2. Proof
Let ##f, g## be two functions ##V \to \mathbb R## satisfying ##g = D^{1/2} f##. Then ##\langle g, L_s g\rangle = \langle f, L f\rangle = \sum_{uv \in E} (f(u) - f(v))^2.##

This identity is first used to show that all the eigenvalues of ##L## (or ##L_s##) are nonnegative: the right-hand side is a sum of squares and therefore nonnegative, so the left-hand side must be as well.

Moreover, if ##g## is an eigenvector for the eigenvalue ##0##, then ##\langle g, L_s g\rangle = 0##, and this identity characterizes the vectors ##f## for which that can occur.
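The thread never pins down the definitions of ##D## and ##L_s## (a point raised below); assuming the common convention ##L = D - A##, ##L_s = D^{-1/2} L D^{-1/2}##, and unit weights, the identity can be checked numerically on a small example:

```python
import numpy as np

# Small undirected graph on 4 vertices with unit weights (an assumption;
# the thread does not fix the weights or the exact form of L_s).
edges = [(0, 1), (1, 2), (2, 3), (0, 2)]
n = 4

A = np.zeros((n, n))
for u, v in edges:
    A[u, v] = A[v, u] = 1.0
D = np.diag(A.sum(axis=1))
L = D - A                                    # combinatorial Laplacian
D_inv_sqrt = np.diag(1.0 / np.sqrt(np.diag(D)))
L_s = D_inv_sqrt @ L @ D_inv_sqrt            # one common reading of L_s

f = np.array([1.0, -2.0, 0.5, 3.0])          # arbitrary test function on V
g = np.sqrt(np.diag(D)) * f                  # g = D^{1/2} f

lhs = g @ L_s @ g
mid = f @ L @ f
rhs = sum((f[u] - f[v]) ** 2 for u, v in edges)
print(lhs, mid, rhs)                         # all three agree up to rounding
```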

3. Problems with the proof
There are several things here I don't understand.

(1) Where are the graphs? I see no reference to them.

(2) How does the proof show that the multiplicity of ##0## is the same as the number of CONVEX components of the graph?

(3) Why is the subspace of ##0## generated by the vectors in the problem?
 
GabrielN00 said:
(1) Where are the graphs? I see no reference to them

The graphs are in the Laplacian.
- - - - - -
As for the rest, you have left quite a few things out of this problem, starting with a definition of the adjacency matrix ##A##. It's also not clear what ##A_1, \dots, A_k## represent -- are these principal submatrices? Further, you sometimes write ##L## and other times ##L_s##, and it isn't clear what the difference is. If there is one, it should be stated explicitly; if there isn't, use only one of the two symbols.

You also haven't defined ##D##. In the basic combinatorial form, ##D## is the diagonal matrix whose entry ##D_{k,k}## is the out-degree of node ##k##. If that is how it is being used here, your Laplacian can be defined as ##L = D - A##. But I do not think that is how ##D## is being used here; if it were, with all weights equal to one, then ##D^{1/2}1_{A_i}## would be wrong. (After reading my post and page 15 of the link at the end, you should agree.) I'm aware of other ways to represent the graph Laplacian, but we shouldn't have to guess which one your book is using, nor is guessing really feasible.

- - - -
Given that the problem is underspecified, I'll instead give an overview of how to think about this in the basic case of an adjacency matrix ##A## with only 0s and 1s, where ##D## holds the out-degrees (which equal the in-degrees, since the graph is undirected) and ##L = D - A##. This is sometimes referred to as the combinatorial or un-normalized form of the graph Laplacian.

I'll leave the conversion of this structure to whatever your problem actually wants, up to you.
- - - -

The graph is undirected, hence ##A## is symmetric and so is ##L##; your eigenvectors may therefore be chosen to form an orthogonal basis. A direct application of Gerschgorin discs also gives the result that ##L## is positive semi-definite.
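The Gerschgorin argument can be sketched for this un-normalized setup: each disc of ##L = D - A## is centred at the degree ##d_i## with radius equal to the sum of the off-diagonal magnitudes, which is again ##d_i##, so every disc sits in ##[0, 2d_i]## and no eigenvalue can be negative. A quick numerical check on an example graph:

```python
import numpy as np

# 4-cycle with a chord: connected, 0/1 weights (an assumed example graph).
edges = [(0, 1), (1, 2), (2, 3), (0, 3), (0, 2)]
n = 4
A = np.zeros((n, n))
for u, v in edges:
    A[u, v] = A[v, u] = 1.0
D = np.diag(A.sum(axis=1))
L = D - A

# Gerschgorin: centre d_i, radius d_i -> every disc stays in [0, 2*d_i].
centres = np.diag(L)
radii = np.abs(L).sum(axis=1) - np.abs(np.diag(L))
assert np.all(centres - radii >= 0)          # discs never dip below zero

eigvals = np.linalg.eigvalsh(L)              # symmetric -> real, sorted
print(np.round(eigvals, 6))                  # all nonnegative, one zero
```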

By construction ##\mathbf 1^T \mathbf L \mathbf 1 = 0## because ##\mathbf L \mathbf 1 = \mathbf 0##. So a graph Laplacian is always singular.
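A minimal check of that construction (using the same assumed 0/1 setup): the row sums of ##L = D - A## vanish, so ##\mathbf 1## is always in the kernel.

```python
import numpy as np

# Triangle graph: L @ 1 = 0 by construction, so L is singular.
edges = [(0, 1), (1, 2), (0, 2)]
n = 3
A = np.zeros((n, n))
for u, v in edges:
    A[u, v] = A[v, u] = 1.0
L = np.diag(A.sum(axis=1)) - A

ones = np.ones(n)
print(L @ ones)                  # the zero vector
print(np.linalg.det(L))          # ~0: every graph Laplacian is singular
```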

As for the rest -- I like to start simple and build, i.e. first consider the implications when there is only one eigenvalue equal to zero. This maps directly to a connected graph, via the fact that any ##(n-1) \times (n-1)## principal submatrix of ##L## has a positive determinant (indeed, said determinant counts the number of spanning trees in the graph, which is greater than zero iff the graph is connected). From here you can use eigenvalue interlacing or quadratic-form arguments to carry this home.
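That determinant fact (the Matrix-Tree theorem) is easy to verify on a small case: the triangle has exactly three spanning trees, and deleting any one row and the matching column of its Laplacian leaves a matrix with determinant 3.

```python
import numpy as np

# Matrix-Tree check on the triangle: any (n-1)x(n-1) principal submatrix
# of L has determinant equal to the number of spanning trees (here 3).
edges = [(0, 1), (1, 2), (0, 2)]
n = 3
A = np.zeros((n, n))
for u, v in edges:
    A[u, v] = A[v, u] = 1.0
L = np.diag(A.sum(axis=1)) - A

minor = np.delete(np.delete(L, 0, axis=0), 0, axis=1)
print(round(np.linalg.det(minor)))   # 3 -> positive -> graph connected
```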

Now assume there is more than one eigenvalue equal to zero. This means the graph is not connected. (Note: I tend to use terms like connected regions and distinct recurrent classes interchangeably.)

This suggests that there are multiple non-communicating recurrent classes (which tie directly in with convex components). In general, if you have multiple non-communicating connected regions, it is smart to represent your adjacency matrix in blocked form. To answer my own question from earlier: I think ##A_1, \dots, A_k## would be principal submatrices iff you use an appropriate blocked structure.
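A blocked example (again under the assumed un-normalized setup): a triangle plus a disjoint edge. Ordering the vertices by component makes ##A## and ##L## block diagonal, the zero eigenvalue gets multiplicity 2, and the component indicator vectors ##1_{A_i}## lie in its eigenspace.

```python
import numpy as np

# Disconnected graph: triangle {0,1,2} and edge {3,4}. With vertices
# ordered by component, L is block diagonal and the zero eigenvalue
# has multiplicity equal to the number of components.
edges = [(0, 1), (1, 2), (0, 2), (3, 4)]
n = 5
A = np.zeros((n, n))
for u, v in edges:
    A[u, v] = A[v, u] = 1.0
L = np.diag(A.sum(axis=1)) - A

eigvals = np.linalg.eigvalsh(L)
print(np.sum(np.isclose(eigvals, 0)))        # multiplicity 2

ind1 = np.array([1.0, 1, 1, 0, 0])           # indicator 1_{A_1}
ind2 = np.array([0.0, 0, 0, 1, 1])           # indicator 1_{A_2}
print(np.allclose(L @ ind1, 0), np.allclose(L @ ind2, 0))   # both True
```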

With this in mind and a careful look through of

https://www.cs.cmu.edu/~aarti/Class/10701/slides/Lecture21_2.pdf

you should be done.
- - -
Regarding convex components in particular, the idea is that you can represent a connected region by a graph of convex components. I don't know why they phrased it this way or what the purpose is; terms like connected region or recurrent class seem a lot more natural to me here.
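To tie this back to the original claim about ##D^{1/2}1_{A_i}##: if ##L_s = D^{-1/2} L D^{-1/2}## (one common convention, which the thread never confirms), then ##L \, 1_{A_i} = 0## gives ##L_s (D^{1/2} 1_{A_i}) = D^{-1/2} L \, 1_{A_i} = 0##, so those vectors do lie in the zero eigenspace of ##L_s##:

```python
import numpy as np

# Same disconnected example: triangle {0,1,2} plus edge {3,4}.
# Assumes L_s = D^{-1/2} L D^{-1/2}; the thread leaves L_s undefined.
edges = [(0, 1), (1, 2), (0, 2), (3, 4)]
n = 5
A = np.zeros((n, n))
for u, v in edges:
    A[u, v] = A[v, u] = 1.0
D = np.diag(A.sum(axis=1))
L = D - A
D_inv_sqrt = np.diag(1.0 / np.sqrt(np.diag(D)))
L_s = D_inv_sqrt @ L @ D_inv_sqrt

for ind in (np.array([1.0, 1, 1, 0, 0]), np.array([0.0, 0, 0, 1, 1])):
    g = np.sqrt(np.diag(D)) * ind            # g = D^{1/2} 1_{A_i}
    print(np.allclose(L_s @ g, 0))           # True: g is in the kernel
```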
 
