# Fighting the giant component of a random graph

1. Jan 27, 2012

### fr.jurain

Hi all,

Here's a particular procedure to deal with the giant component of a random graph. Is it "almost always" as efficient as experiments suggest? If so, can you shed some light as to how the giant component evolves under this procedure?

Suppose we're given a large graph with a large number M of edges, generated in the setting of a random graph model, e.g. the Erdős–Rényi model. The interesting case is where the largest connected component, containing S* nodes, is a giant component, i.e. S* is a positive fraction (not o(1)) of the total number of nodes. We're then given free rein to select a subset A of nodes, of size S(A), thereby eliminating all edges leading to these nodes.
Suppose we build A one node at a time, A0 = {}, A1, ..., Ak, each time using the following rule: for each of the remaining nodes in turn, compute what S*, the size of the largest connected component, would be if all edges leading to nodes in Ai (= Ai-1 plus the node on trial) were removed; then select the node that minimizes S*. Observe how S* + S(A), the cost criterion in the practical problem I'm investigating, evolves as A grows under this policy, and stop growing A when S* + S(A) reaches its global minimum.
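For concreteness, here is a minimal sketch of that greedy procedure in Python, assuming the Erdős–Rényi generator from networkx; the function names are my own, and for simplicity the loop grows A all the way down and reports the running minimum of S* + S(A) rather than stopping early:

```python
# Sketch of the greedy giant-component attack described above.
# Assumes networkx; graph size and edge probability are arbitrary choices.
import networkx as nx

def largest_cc_size(G):
    """S*: size of the largest connected component (0 for an empty graph)."""
    if G.number_of_nodes() == 0:
        return 0
    return max(len(c) for c in nx.connected_components(G))

def greedy_attack(G):
    """Grow A one node at a time; at each step remove the node whose
    deletion minimizes S*.  Return the prefix of A achieving the global
    minimum of S* + S(A), together with that minimum cost."""
    H = G.copy()
    A = []
    best_cost = largest_cc_size(H)   # cost with A empty
    best_A = []
    while H.number_of_nodes() > 0:
        # Try each remaining node; keep the one minimizing S* after removal.
        best_node, best_s = None, None
        for v in list(H.nodes):
            H2 = H.copy()
            H2.remove_node(v)
            s = largest_cc_size(H2)
            if best_s is None or s < best_s:
                best_node, best_s = v, s
        H.remove_node(best_node)
        A.append(best_node)
        cost = best_s + len(A)       # S* + S(A) after this step
        if cost < best_cost:
            best_cost = cost
            best_A = list(A)
    return best_A, best_cost

# Small experiment in the spirit of the post (sizes chosen arbitrarily).
G = nx.erdos_renyi_graph(60, 0.05, seed=1)
A, cost = greedy_attack(G)
print("|A| =", len(A), "  min S* + S(A) =", cost)
```

Each step re-evaluates S* for every surviving candidate node, so this brute-force version costs O(n) component computations per step; it is only meant for graphs of the modest sizes mentioned below.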

Experiments with random graphs of modest size (a few hundred edges) strongly suggest that the giant component disappears while A still contains only O(log M) nodes. Hence the following questions:
1) is it "almost always" true, e.g. for Erdős–Rényi graphs?
2) if so, how does S* evolve as we grow A?
3) how is S* + S(A) distributed at the optimum found by the above procedure? Is it O(log M), O(M^(2/3)), something else?
4) how far is S* + S(A) at the optimum found from the absolute best achievable on the given graph?