
## Main Question or Discussion Point

Suppose there are n items, each belonging to at least one of two sets A and B, where the intersection of A and B is a non-empty set C. I apply the following procedure:

-------------------------------------------------

Allocate each of the n items to its own single-item bin (so there will be n single-item bins).

Repeat the following two steps many times:

(1) From the n items, select two distinct items i and j uniformly at random.

(2) If items i and j are from the same set, move i to j's bin.

-------------------------------------------------
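For anyone who wants to experiment, here is a quick simulation sketch of the procedure (the function name and the bin representation are my own choices, not part of the problem statement):

```python
import random

def simulate(A, B, steps, seed=0):
    """Run the bin procedure on the items in A | B.

    Each item starts in its own bin. At each step we pick two distinct
    items i and j; if they belong to a common set, item i (alone, not
    its whole bin) moves into j's bin. Returns the number of non-empty
    bins at the end.
    """
    rng = random.Random(seed)
    items = sorted(A | B)
    bin_of = {x: x for x in items}      # bin label of each item
    for _ in range(steps):
        i, j = rng.sample(items, 2)
        if (i in A and j in A) or (i in B and j in B):
            bin_of[i] = bin_of[j]       # only i moves; its old bin may empty out
    return len(set(bin_of.values()))    # count of distinct non-empty bins
```

In my small runs the count settles quickly, e.g. to 2 bins when A and B are disjoint and to 1 bin when A = B.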

Questions: Will this procedure always converge to a fixed number of non-empty bins?

If yes, can you provide some hints on how I can prove this?

Here are some of my thoughts (please correct me if I am wrong):

The answer is obvious when A and B are disjoint: two non-empty bins are left after many iterations.

I find it trickier when the intersection C is non-empty: we may end up with one non-empty bin (especially when C makes up most of A and B), or we may end up with two non-empty bins (especially when C is only a small part of A and B). In any case, I am just interested in showing that the procedure will (almost surely) converge to some constant number of non-empty bins after many iterations.
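To probe the two regimes above, here is a small self-contained experiment I sketched; the parameters name the sizes of A only, B only, and the overlap C (my own naming, nothing canonical):

```python
import random

def run(n_a_only, n_b_only, n_overlap, steps=50_000, seed=0):
    """One run of the procedure with |A \\ B|, |B \\ A|, |C| as given.

    Returns the number of non-empty bins after `steps` iterations.
    """
    rng = random.Random(seed)
    a_only = range(n_a_only)
    b_only = range(n_a_only, n_a_only + n_b_only)
    both = range(n_a_only + n_b_only, n_a_only + n_b_only + n_overlap)
    A, B = set(a_only) | set(both), set(b_only) | set(both)
    items = sorted(A | B)
    bin_of = {x: x for x in items}      # each item starts in its own bin
    for _ in range(steps):
        i, j = rng.sample(items, 2)
        if (i in A and j in A) or (i in B and j in B):
            bin_of[i] = bin_of[j]       # move only item i into j's bin
    return len(set(bin_of.values()))

# Compare regimes: overlap dominating vs. overlap being a single item.
print(run(2, 2, 8))   # C makes up most of A and B
print(run(8, 8, 1))   # C is a single shared item
```

One thing the simulation makes visible: a move never increases the number of non-empty bins (item i always lands in a bin that is already non-empty), so the bin count is non-increasing over time.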

Any help will be appreciated.

Thanks!
