Why incidence and adjacency matrices (graph theory)

Summary
Adjacency and incidence matrices are essential in graph theory for representing relationships between nodes, despite their lack of visual clarity compared to graphical representations. The discussion highlights a common misconception that visual proofs are necessary for understanding mathematical concepts, particularly in complex graphs. It emphasizes that mathematical analysis often relies on symbolic methods rather than graphical interpretations. The challenge of visualizing large graphs, such as those with 5,000 nodes, raises questions about the effectiveness of visual proofs. Ultimately, the conversation underscores the importance of matrices in graph theory, even when they may seem less intuitive than graphical forms.
Avichal
My book introduces the concept of adjacency and incidence matrices, but I don't understand their use.
Normally we move from symbolic representations toward graphical ones, as with Cartesian graphs: to visualize a function better, we draw it.
But here we are doing the opposite: from nice pictures of graphs we are moving to matrices, which do not help us much visually.
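For concreteness, here is a minimal sketch in plain Python of what these two encodings look like; the example graph (a 4-cycle with one chord) is illustrative, not taken from any particular book:

```python
# Encode a small undirected graph as adjacency and incidence matrices.
# Example graph (illustrative): a 4-cycle 0-1-2-3-0 plus the chord 0-2.
edges = [(0, 1), (1, 2), (2, 3), (3, 0), (0, 2)]
n = 4                # number of vertices
m = len(edges)       # number of edges

# Adjacency matrix: A[i][j] = 1 iff {i, j} is an edge.
A = [[0] * n for _ in range(n)]
for u, v in edges:
    A[u][v] = A[v][u] = 1

# Incidence matrix: B[i][e] = 1 iff vertex i is an endpoint of edge e.
B = [[0] * m for _ in range(n)]
for e, (u, v) in enumerate(edges):
    B[u][e] = B[v][e] = 1

print(A)  # [[0, 1, 1, 1], [1, 0, 1, 0], [1, 1, 0, 1], [1, 0, 1, 0]]
```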
 
Avichal said:
Normally we move from symbolic representations toward graphical ones, as with Cartesian graphs: to visualize a function better, we draw it.

That's false. For example, we generally don't compute the derivative of a function by graphing it and then performing some geometric construction on the graph. My impression of visual presentations is that they are more like decorations accompanying the mainstream of mathematics, which follows a mostly symbolic course.

As to graph theory, what would a visual proof about a graph with 5,000 nodes look like? What are the rules of the game for visual proofs? Do you say, "See, you can look at the picture and tell that..."?
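To make the 5,000-node point concrete: such a graph is hopeless to draw, but its adjacency matrix still answers precise questions, because the (i, j) entry of A^k counts the walks of length k from vertex i to vertex j. A sketch in Python with NumPy (the random graph and the exponent k = 3 are illustrative choices, not from the discussion):

```python
import numpy as np

# Build a random simple undirected graph on 5,000 nodes (illustrative).
rng = np.random.default_rng(0)
n = 5000
upper = np.triu(rng.integers(0, 2, size=(n, n)), k=1)  # random strict upper triangle
A = (upper + upper.T).astype(float)                    # symmetric, zero diagonal

# (A^3)[i][j] counts walks of length 3 from i to j -- no picture needed.
walks3 = np.linalg.matrix_power(A, 3)
print(walks3[0, 1])          # walks of length 3 from node 0 to node 1
print(np.trace(walks3) / 6)  # triangle count: trace(A^3)/6 for a simple graph
```

The design point is the one made above: counting triangles or walks in a graph this size is a routine matrix computation, while a "visual proof" of the same facts has no obvious rules at all.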
 