# Why Does This Adjacency Matrix Theorem Work?

physicsguy101
I'm curious why the following theorem always holds: when a graph is encoded as an adjacency matrix, powers of that matrix count the number of walks of a given length between vertices.

If $A$ is the adjacency matrix of a graph $G$ (with vertices $v_1, \dots, v_n$), the $(i, j)$-entry of $A^r$ equals the number of distinct $r$-walks from vertex $v_i$ to vertex $v_j$ in the graph.
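For concreteness, here is the kind of numerical check I have in mind (a sketch using a small 4-vertex example graph of my own, not anything special):

```python
# Sanity check of the theorem on an example graph (a square 0-1-2-3-0
# with the extra diagonal edge 1-3): compare the entries of A^r against
# a brute-force count of r-walks.
import numpy as np
from itertools import product

A = np.array([
    [0, 1, 0, 1],
    [1, 0, 1, 1],
    [0, 1, 0, 1],
    [1, 1, 1, 0],
])

r = 3
Ar = np.linalg.matrix_power(A, r)

n = len(A)

def count_walks(i, j, r):
    # An r-walk i -> j is a sequence of r edges; enumerate every choice
    # of the r - 1 intermediate vertices and keep those where each
    # consecutive pair is actually an edge.
    return sum(
        all(A[p][q] for p, q in zip((i, *mid), (*mid, j)))
        for mid in product(range(n), repeat=r - 1)
    )

for i in range(n):
    for j in range(n):
        assert Ar[i][j] == count_walks(i, j, r)
print("A^3 matches the brute-force walk counts")
```

For instance, `Ar[0][0]` comes out to 2 here, matching the two closed 3-walks 0-1-3-0 and 0-3-1-0 through the triangle containing vertex 0.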

If anyone can provide any insight or explanation, I would appreciate it.

The theorem makes sense to me, but I'm just unsure mathematically why exactly it works the way it does.

Thanks!

## Answers and Replies

Homework Helper
Your "Ar" is, of course, the power, $A^r$.

In an adjacency matrix $A$, the entry $a_{ij}$ is 1 if vertex $i$ is adjacent to vertex $j$ (there is an edge from $i$ to $j$) and 0 otherwise. In $A^2$, $(a^2)_{ij}= \sum_k a_{ik}a_{kj}$.

Each term in that sum is either 0 or 1, and it is 1 only if both $a_{ik}$ and $a_{kj}$ are 1, that is, only if there is an edge from $i$ to $k$ and an edge from $k$ to $j$. Those two edges together form a walk of length 2 from $i$ to $j$ passing through $k$. The sum over $k$ therefore counts the number of length-2 walks from $i$ to $j$.
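You can watch that term-by-term reasoning happen directly. A sketch on the path graph 0-1-2 (my own minimal example):

```python
# Expand (A^2)_{ij} = sum_k a_{ik} a_{kj} term by term for the path
# graph 0 - 1 - 2 and see which intermediate vertices k contribute.
A = [
    [0, 1, 0],
    [1, 0, 1],
    [0, 1, 0],
]

i, j = 0, 2
terms = [A[i][k] * A[k][j] for k in range(3)]
# Only k = 1 gives a nonzero term: there is an edge 0-1 and an edge 1-2,
# so the single 2-walk 0 -> 1 -> 2 is counted exactly once.
print(terms)       # [0, 1, 0]
print(sum(terms))  # 1
```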

Since $A^r= A^{r-1}A$, you can apply an inductive argument: in $(a^{r})_{ij}= \sum_k (a^{r-1})_{ik}a_{kj}$, the $k$-th term counts the walks of length $r-1$ from $i$ to $k$ that can be extended by an edge from $k$ to $j$, so the sum counts all walks of length $r$ from $i$ to $j$.
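The inductive step mirrors a simple dynamic program: multiplying by $A$ once more extends every $(r-1)$-walk by a single edge. A sketch, using a triangle graph as an assumed example:

```python
# Sketch of the inductive argument: walks holds the matrix of r-walk
# counts, built up by extending (r-1)-walks with one more edge,
# i.e. walks_r = walks_{r-1} @ A.
import numpy as np

A = np.array([
    [0, 1, 1],
    [1, 0, 1],
    [1, 1, 0],
])  # triangle graph (example)

walks = np.eye(3, dtype=int)  # r = 0: one empty walk from each vertex to itself
for _ in range(4):            # build up to r = 4
    walks = walks @ A         # each multiplication appends one edge to every walk

# The iterative build agrees with taking the matrix power directly.
assert (walks == np.linalg.matrix_power(A, 4)).all()
print(walks)
```

On the triangle this gives 6 closed 4-walks from each vertex back to itself and 5 between any two distinct vertices, which you can confirm by hand.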

physicsguy101
HallsofIvy, thank you very much. Your response pretty much summed up what I was looking for (sorry, no pun intended!).

I really appreciate it, and understand the concept much more clearly now. Thanks again.
