I need some clarification on the definition of entropy. According to the simple example of the egg in the kitchen, the entropy of a whole egg is lower than that of a cracked egg. This, it is said, is because there are more ways for an egg to be broken than to be whole. But if one talks of a particular macrostate, it must refer to the macrostate of the 'particular cracked egg' – the egg cracked in exactly that specific fashion. In that case, how can it be claimed that there are more microstates corresponding to the cracked egg than to the whole egg? Or, one could be comparing a whole egg and a 'not-whole' egg; then, of course, there are more ways of being 'not whole' than whole. But then again, we could compare an egg 'cracked in a specific fashion' with one 'not cracked in that specific fashion'. In this case there are more possibilities for the latter than the former, and the latter could even be a whole egg. Does that mean the whole egg has more entropy than the cracked one? The same reasoning would apply to a cup that has fallen and broken – its pieces scattered in that particular fashion. Obviously, I am going wrong somewhere. Please help.
Entropy is "not knowing." It's not correct to equate a general cracked egg with an egg whose crack is perfectly specified; their entropies are different. I'm not used to the egg analogy; I usually think in terms of a deck of cards. A randomly shuffled deck has a higher entropy than an ordered deck, even though it's the same deck, because you don't know the arrangement of the cards. However, if you took a specific randomly shuffled deck and called its specific sequence the "mviswanathan sequence," then any deck in the mviswanathan sequence would have a low entropy--the same as an ordered deck--because you would know the position of each card with no uncertainty. That's where I think your reasoning slips: when you focus on an "egg cracked exactly in a specific fashion," you're no longer looking at the same system, but at a different, lower-entropy system. I hope that helps resolve the apparent contradiction.
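To make the "not knowing" idea concrete, here is a toy calculation of my own (not from the original post): entropy measured as missing information in bits. A shuffled deck whose order is unknown carries log2(52!) bits of missing information; a deck whose sequence is fully specified (whether it's the ordered deck or the hypothetical "mviswanathan sequence") carries zero.

```python
import math

# Entropy as "not knowing": with all 52! orderings equally likely,
# the missing information about a shuffled deck is log2(52!) bits.
n_orderings = math.factorial(52)
entropy_shuffled = math.log2(n_orderings)

# Once one specific sequence is singled out (e.g. the "mviswanathan
# sequence"), only a single ordering is compatible with that
# description, so the missing information is log2(1) = 0 bits.
entropy_specified = math.log2(1)

print(f"shuffled deck: {entropy_shuffled:.1f} bits of missing information")
print(f"fully specified deck: {entropy_specified:.1f} bits")
```

The point of the sketch: the number (about 226 bits vs. 0) depends entirely on the description you adopt, which is exactly why a "crack specified exactly" and "cracked somehow" are different macrostates with different entropies.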
Mapes is correct - the broken egg/shuffled deck/broken cup has a high entropy because many microstates (the specific states of the broken egg/cup) correspond to the same macrostate - a state that is an average over certain thermodynamic variables. It's the same reason why a gas in a box, if specified to exist at a certain temperature T (one macrostate), consists of an extremely large number of microstates - the specific positions and velocities of every molecule in the box.
It really depends on how you define "macro" vs. "micro" states. Different possible ways of cracking the egg are distinguishable on visual inspection at a macroscopic level, so I suppose you could define them all as different macrostates. The cracked egg vs. whole egg is really just meant as a conceptual analogy, though; I don't think there's any "standard" way to define the macrostates for such a system. Normally thermodynamics deals with more macroscopically uniform systems, like gas-filled boxes, where macrostates are defined in terms of macro-parameters like temperature and pressure.
I did not understand the first statement. In the end, does it mean that there is no absolute value of entropy - that it depends on from where/what you are looking? Then how can one state that the entropy of a closed system keeps going up? Maybe it has to do with what a "system" means. Still on the egg story - don't the whole egg and the broken egg with all its pieces refer to the same system, since the broken egg has exactly all the components of the original? Or maybe there is some difference, considering the binding forces/stresses in the original egg that got released (maybe I am not using the right words) when the egg broke.
This is an excellent point. We do have an absolute value for entropy because of the conventions we adopt. The eggs, decks of cards, etc. are analogies, so let's go to the better example of a gas in a box that Andy brought up. Our convention is to measure the bulk temperature or total energy, which is relatively easy to do. (These are macrostate variables.) We cannot determine the momentum and position of each atom. But there are many possible values of momentum and position (i.e., possible microstates) that could produce a given temperature or total energy. We will never know which microstate we're in, and it changes every instant anyway. This is the "not knowing." We take the entropy to be proportional to the logarithm of the number of microstates, and we set a reference point of zero (or as close to zero as to be unmeasurable) at a temperature of absolute zero for a system in equilibrium. These conventions are what allow us to talk quantitatively about entropy and changes in entropy.
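A minimal sketch of that counting, using a toy model of my own (not from the thread): N two-level particles, each with energy 0 or eps. Fixing the total energy E = n·eps (a macrostate variable) leaves C(N, n) compatible microstates - the choices of which n particles are excited - and the entropy is S = k_B ln Ω over that count.

```python
import math

k_B = 1.380649e-23  # Boltzmann constant, J/K


def boltzmann_entropy(N, n):
    """S = k_B * ln(Omega), where Omega = C(N, n) is the number of
    microstates (which n of the N particles are excited) compatible
    with the macrostate of fixed total energy n*eps."""
    omega = math.comb(N, n)
    return k_B * math.log(omega)


N = 100
for n in (0, 1, 50):
    print(f"n = {n:2d} excited: Omega = {math.comb(N, n)}, "
          f"S = {boltzmann_entropy(N, n):.3e} J/K")
```

Note how the fully specified macrostate (n = 0, only one microstate) has exactly zero entropy, matching the convention of S = 0 for a perfectly known state, while the "middle" macrostate has an enormous Ω.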
Because regardless of how you choose to define your macrostates (what physicists would call your choice of 'coarse-graining' for the states of the system), the entropy of a given macrostate is always defined in terms of the logarithm of the number of microstates associated with that macrostate. And it can be proved that if we start with a randomly chosen microstate from whatever the initial macrostate is, the underlying dynamics of the system are always more likely to take the system to future macrostates that have a greater number of microstates associated with them. I believe you only need a few basic assumptions about the dynamics governing the system to prove this, such as the assumption that the dynamics respect Liouville's theorem (in classical statistical mechanics, anyway; quantum statistical mechanics might require some different assumptions).

I haven't studied this stuff in a while, but my understanding is that, at a conceptual level, one way of putting Liouville's theorem is this: pick any region of "phase space" (an abstract space where every point represents a particular microstate, and macrostates are volumes containing many points) and assume the system is equally likely to occupy any point in that region. If you then evolve all these points forward to see which region of phase space this set of systems occupies at some later time, the volume of the later region will be the same as the original. It can be seen as a kind of conservation law for phase-space volume over time. You are free to start from a volume of phase space representing a macrostate that is far from equilibrium.

One can see intuitively why this would be helpful in proving the second law: it means there can't be any small volumes of phase space that larger volumes are being "attracted" to, and we know that lower-entropy macrostates represent a much smaller proportion of the total volume of phase space than higher-entropy macrostates.
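The "much smaller proportion of phase space" point can be made concrete with the simplest coarse-graining I can think of (my own toy example, not from the thread): N distinguishable particles, each in the left or right half of a box. The macrostate is just the count on the left; the microstate is the full left/right assignment, so macrostate k contains C(N, k) microstates.

```python
import math

# Coarse-graining toy: macrostate = number of particles in the left
# half of the box; microstate = the full left/right assignment.
N = 50
total = 2 ** N  # total number of microstates

for k in (0, 10, 25):
    omega = math.comb(N, k)  # microstates in macrostate k
    print(f"{k} on the left: Omega = {omega}  "
          f"(fraction of state space = {omega / total:.2e})")

# The "all on one side" macrostate (k = 0) is a single point out of
# 2**50, while the even split (k = 25) covers over a tenth of the whole
# state space. A volume-preserving dynamics started in the tiny region
# is overwhelmingly likely to wander into the huge equilibrium region,
# and overwhelmingly unlikely to wander back out.
```

This is the same counting as the gas-in-a-box example, just coarse enough to do by hand: low-entropy macrostates occupy a vanishing fraction of the state space, which is the intuition behind the second law.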
See the discussion here, for example. Roger Penrose also has a good discussion of this stuff on pages 228-238 and 309-317 of his book The Emperor's New Mind (if you read that book, keep in mind that most mathematicians would disagree with his idiosyncratic ideas about human mathematical ability being noncomputable, and most physicists would disagree with his speculations about quantum gravity--his discussions of mainstream physics ideas are quite good though).
Thanks, everyone. I just joined this forum and frankly did not expect such active participation. I have decided to go through the information and links suggested (and come back if I still have some doubts) :)
Mapes' "not knowing" was good. It helped me get the idea of macrostates and microstates. I could understand the reference by Thomas T. (http://www.tim-thompson.com/entropy1.html) and have some clarity. JesseM's explanation and Liouville's theorem were a little too advanced for me, but I definitely appreciate the help. Thanks.