GabrielN00 said:
(1) Where are the graphs? I see no reference to them
The graphs are in the Laplacian.
- - - - - -
As for the rest, you have left quite a few things out of this problem, starting with defining the adjacency matrix as ##A##. It's also not clear what ##A_1, \dots, A_k## represent -- are these principal submatrices? Further, at times you use ##L## and at other times ##L_s##, and it isn't clear what the difference is -- if there is one, it should be stated explicitly, and if there isn't, then only use one of the two symbols.
You also haven't defined ##D##. In the basic combinatorial form, ##D## is a diagonal matrix whose entry ##D_{k,k}## is the out-degree of node ##k##. If that is how it is being used here, your Laplacian can be defined as ##L = D - A##. But I do not think that is how ##D## is being used here. If it is, and all weights are one, then ##D^{1/2}\mathbf 1_{A_i}## would be wrong. (After reading my post and page 15 of the link at the end, you should agree.) I'm aware of other ways to represent the graph Laplacian, but we shouldn't have to guess which one your book is referring to, nor is guessing really feasible.
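To make that distinction concrete, here is a minimal numpy sketch on a toy graph of my own choosing (a path plus a disjoint edge): for the combinatorial ##L = D - A## the component indicator ##\mathbf 1_{A_i}## itself is a null vector, whereas for the symmetric normalized Laplacian ##L_{sym} = D^{-1/2} L D^{-1/2}## the corresponding null vector is ##D^{1/2}\mathbf 1_{A_i}## -- which is, I believe, the convention the linked slides have in mind.

```python
import numpy as np

# Toy undirected graph, two components: a path 0-1-2 and a disjoint edge 3-4.
A = np.array([
    [0, 1, 0, 0, 0],
    [1, 0, 1, 0, 0],
    [0, 1, 0, 0, 0],
    [0, 0, 0, 0, 1],
    [0, 0, 0, 1, 0],
], dtype=float)

deg = A.sum(axis=1)                      # out-degree = in-degree for an undirected graph
L = np.diag(deg) - A                     # combinatorial (un-normalized) Laplacian
L_sym = np.diag(deg**-0.5) @ L @ np.diag(deg**-0.5)   # symmetric normalized Laplacian

ind = np.array([1, 1, 1, 0, 0], dtype=float)          # indicator 1_{A_1} of the first component

print(np.allclose(L @ ind, 0))                        # True:  1_{A_1} is a null vector of D - A
print(np.allclose(L_sym @ (np.sqrt(deg) * ind), 0))   # True:  D^{1/2} 1_{A_1} is a null vector of L_sym
print(np.allclose(L_sym @ ind, 0))                    # False: the raw indicator is not
```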
- - - -
Given that the problem is underspecified, I'll instead give an overview of how to think about this for the basic case: an adjacency matrix ##A## with only 0s and 1s, ##D## holding the out-degrees (which equal the in-degrees for an undirected graph), and ##L = D - A##. This is sometimes referred to as the combinatorial or un-normalized form of the graph Laplacian.
I'll leave it to you to convert this structure into whatever form your problem actually wants.
- - - -
The graph is undirected, hence ##A## is symmetric and so is ##L##. Your eigenvectors may therefore be chosen to form an orthogonal basis. A direct application of Gerschgorin discs also gives you that ##L## is positive semi-definite: each disc is centered at ##D_{i,i}## with radius ##D_{i,i}##, so no disc extends below zero.
By construction ##\mathbf 1^T L \mathbf 1 = 0## because ##L \mathbf 1 = \mathbf 0## (each row of ##L## sums to zero). So a graph Laplacian is always singular.
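Here is a quick numerical sanity check of those claims, again in numpy and again on an example graph of my own choosing (a 4-cycle):

```python
import numpy as np

# Example: the 4-cycle (a connected graph on 4 nodes).
A = np.array([
    [0, 1, 0, 1],
    [1, 0, 1, 0],
    [0, 1, 0, 1],
    [1, 0, 1, 0],
], dtype=float)
deg = A.sum(axis=1)
L = np.diag(deg) - A

print(np.allclose(L, L.T))              # True: symmetric, so eigenvectors can be chosen orthogonal

# Gerschgorin: disc i is centered at deg[i] with radius deg[i], so it lies in [0, 2*deg[i]].
centers = np.diag(L)
radii = np.abs(L - np.diag(centers)).sum(axis=1)
print((centers - radii >= 0).all())     # True: no disc reaches below 0, hence L is PSD

print(np.linalg.eigvalsh(L).round(6))   # [0. 2. 2. 4.]: non-negative, with a zero eigenvalue

# Each row of L sums to zero, so L @ 1 = 0 and L is always singular.
print(np.allclose(L @ np.ones(4), 0))   # True
```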
As for the rest -- I like to start simple and build. That is, start by considering the implications of there being only one eigenvalue equal to zero. This maps directly to a connected graph via the fact that any ##(n-1) \times (n-1)## principal submatrix of ##L## has a positive determinant (indeed, that determinant counts the number of spanning trees in the graph, which is greater than zero iff the graph is connected). From here you can use eigenvalue interlacing or quadratic-form arguments to carry this home.
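To see the spanning-tree statement numerically, here is the matrix-tree theorem applied to the same 4-cycle as above, which has exactly 4 spanning trees:

```python
import numpy as np

# The 4-cycle again: connected, so exactly one eigenvalue equals zero.
A = np.array([
    [0, 1, 0, 1],
    [1, 0, 1, 0],
    [0, 1, 0, 1],
    [1, 0, 1, 0],
], dtype=float)
L = np.diag(A.sum(axis=1)) - A

# Delete row 0 and column 0; the determinant of the remaining (n-1) x (n-1)
# principal submatrix counts the spanning trees (matrix-tree theorem).
M = np.delete(np.delete(L, 0, axis=0), 0, axis=1)
print(int(round(np.linalg.det(M))))     # 4: positive, so the graph is connected

print(np.linalg.eigvalsh(L).round(6))   # [0. 2. 2. 4.]: only one zero eigenvalue
```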
Now assume there is more than one eigenvalue equal to zero. This means the graph is not connected. (Note: I tend to use terms like connected regions and distinct recurrent classes interchangeably.)
This suggests that there are multiple non-communicating recurrent classes (which tie directly in with convex components). In general, if you have multiple non-communicating connected regions, it is smart to represent your adjacency matrix in blocked form, as sketched below. To answer my own question earlier: I think ##A_1, \dots, A_k## would be principal submatrices iff you use an appropriate blocked structure.
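Here is what that blocked structure looks like for the same disconnected toy graph as in the first snippet (a path plus a disjoint edge, with the vertices already listed component by component):

```python
import numpy as np

# Two components, listed in component order, so A -- and hence L -- is block diagonal:
# the diagonal blocks are exactly the principal submatrices A_1 and A_2.
A = np.array([
    [0, 1, 0, 0, 0],
    [1, 0, 1, 0, 0],
    [0, 1, 0, 0, 0],
    [0, 0, 0, 0, 1],
    [0, 0, 0, 1, 0],
], dtype=float)
L = np.diag(A.sum(axis=1)) - A

# The multiplicity of the eigenvalue 0 equals the number of connected components.
eigvals = np.linalg.eigvalsh(L)
print(int(np.isclose(eigvals, 0).sum()))      # 2

# The component indicators 1_{A_1} and 1_{A_2} span that null space.
for ind in (np.array([1, 1, 1, 0, 0.]), np.array([0, 0, 0, 1, 1.])):
    print(np.allclose(L @ ind, 0))            # True, True
```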
With this in mind and a careful look through of
https://www.cs.cmu.edu/~aarti/Class/10701/slides/Lecture21_2.pdf
you should be done.
- - -
Regarding convex components in particular: the idea is that you can represent a connected region by a graph of convex components. I don't know why they phrased it this way or what the purpose is; terms like connected region or recurrent class seem a lot more natural to me here.