Understanding Brillouin's Entropy Decrease

AI Thread Summary
Brillouin's step from k·Δ(log P) to -k·(p/P_0) is explained by the Taylor expansion of log(1 - ε) with ε = p/P_0, valid when p is much smaller than P_0. The discussion also notes that Brillouin's ideas, particularly his "negentropy", are considered obsolete in modern physics, and participants recommend contemporary references such as Bennett's work for a better understanding of information theory and entropy. A follow-up question asks why the visible-light condition involves k·T_0 rather than the (3/2)·k·T_0 expected for the thermal energy of gas particles; that point is left open. Overall, the conversation blends historical context with a modern critique of Brillouin's contributions.
LTP
I do not quite understand how Brillouin goes from k\cdot \Delta (\log P) to -k\cdot \frac{p}{P_0} in this context:

Once the information is obtained, it can be used to decrease the entropy of the system. The entropy of the system is
S_0=k\ln P_0
according to Boltzmann's formula, where P_0 represents the total number of microscopic configurations (Planck's "complexions") of the system. After the information has been obtained, the system is more completely specified. P is decreased by an amount p and P_1=P_0-p . The entropy decrease is then
\Delta S_i = S_1-S_0 = k\cdot \Delta (\log P) = -k\cdot \frac{p}{P_0}
It is obvious that p \ll P_0 in all practical cases.
from "Maxwell's Demon cannot operate: Information and Entropy", L. Brillouin, 1950.

Could anybody offer a meaningful explanation?

[I added the "The entropy decrease is then"-bit because the tex wouldn't display properly.]
 
Last edited:
Gosh, why are you reading THAT?

[EDIT: In this post I was responding to an earlier version of Post #1 in this thread, in which due to what turned out to be a spelling error, the OP appeared to mention the sternly deprecated term "negentropy", which provoked me to order all hands to action stations, as it were! See my Post #5 below for further discussion of this misunderstanding.]

I hope you are reading that paper (BTW, shouldn't you cite it properly?) not because someone recommended it, but only because you stumbled over it, without realizing that this is a bit like stumbling over, and studying, a treatise on Ptolemy's model of the solar system, unaware that the model was discarded long ago!

Similarly, Brillouin eventually developed his ideas on information theory into a book (L. Brillouin, Science and Information Theory, Academic Press, 1962) which was obsolete when it came out and has long since been tossed by math/physics researchers into the dustbin of failed scientific monographs. In particular, the concept of negentropy (you misspelled the word!), of which he made such a fuss in that book, was never a sensible quantity to define, was never taken seriously by the mathematical literati, never became standard in math/physics, and nowadays is used only by persons (mostly biologists) who don't realize how silly it makes them sound (kinda like boasting about your gaily painted new donkey cart, not realizing that all your neighbors drive Ferrari roadsters).

A good place to start learning about more modern approaches might be Cover & Thomas, Elements of Information Theory, Wiley, 1991, followed by the old Sci. Am. article by Charles Bennett. ("Explanations" of Maxwell's demon remain controversial to this day, but Brillouin's ideas were firmly discarded long, long ago; Bennett's ideas are at least still seriously discussed.)

With that out of the way, if you promise to obtain a modern book, we can discuss the underlying question (discarding the absurd notion of "negentropy", which isn't helping here or anywhere else that I know of).
 
Last edited:
Sorry, it should have been "entropy", not "netropy" (or "negentropy"). I've corrected it now [EDIT: I also corrected the formulas, so please reread].

Yes, I am reading this paper, but only as a "historical" document. I am aware that Brillouin's ideas are obsolete. I do have Bennett's article (and Landauer's article on erasure).
This article, as well as the two others and numerous more, is printed in "Maxwell's Demon 2" by Leff and Rex, which is basically a compilation of more or less relevant articles about Maxwell's demon, Smoluchowski's trapdoor and the Szilard engine.

Anyway, back to the original question: I'm sure Brillouin could do his math, I'm just not quite sure how :)
 
Last edited:
LTP said:
I do not quite understand how Brillouin goes from k\cdot \Delta (\log P) to -k\cdot \frac{p}{P_0} in this context:


from "Maxwell's Demon cannot operate: Information and Entropy", L. Brillouin, 1950.

Could anybody offer a meaningful explanation?

[I added the "The entropy decrease is then"-bit because the tex wouldn't display properly.]

he is using \Delta \log P = \log(P_0-p) - \log(P_0) = \log\left(1 - \frac{p}{P_0}\right) \approx -\frac{p}{P_0}

where the last step keeps only the first term of the Taylor expansion of \log(1-\epsilon), so it is valid as long as p is much smaller than P_0.
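A minimal numerical sketch of this step, in Python (the value of eps is an arbitrary illustrative choice, not anything taken from Brillouin's paper):
[code]
import math

# Check that ln(1 - eps) is well approximated by -eps when eps = p/P_0 is small.
eps = 1e-6                      # illustrative value for p/P_0
exact = math.log(1.0 - eps)     # Delta(ln P) = ln(1 - p/P_0)
approx = -eps                   # first-order Taylor term
print(exact, approx)
print("relative error:", abs(exact - approx) / abs(exact))  # ~ eps/2, negligible for p << P_0
[/code]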
 
I assume Nrqed (who said what I was going to say) cleared up the problem, but I can't help adding some remarks on the "negentropy" flap:

LTP said:
Sorry, it should have been "entropy", not "netropy" (or "negentropy")...

Yes, I am reading this paper, but only as a "historical" document. I am aware that Brillouin's ideas are obsolete.

Well, as this shows, mentioning "negentropy" or "Shannon-Weaver entropy" [sic] in my presence is like waving a red flag--- I'll charge!

When you have a spare half hour, you might get a kick out of [thread=200063]this thread[/thread] (gosh, 63 threads earlier and I would have started the 200,000th PF thread since the dawn of time!) and [thread=199303]this thread[/thread], which are examples of threads in which various posters bewail a phenomenon well supported by observation, namely that few newbie PF posters seem to bother to
  • read carefully,
  • write carefully, or even to obey such basic rules as checking their spelling
As the cited threads show, there has been some spirited discussion about how to try to train them to do things the scholarly way.
 
Last edited:
nrqed said:
he is using \Delta \log P = \log(P_0-p) - \log(P_0) = \log\left(1 - \frac{p}{P_0}\right) \approx -\frac{p}{P_0}

where the last step keeps only the first term of the Taylor expansion of \log(1-\epsilon), so it is valid as long as p is much smaller than P_0.
Ah yes, thank you.

Chris Hillman said:
I assume Nrqed (who said what I was going to say) cleared up the problem, but I can't help adding some remarks on the "negentropy" flap:
Well, as this shows, mentioning "negentropy" or "Shannon-Weaver entropy" [sic] in my presence is like waving a red flag--- I'll charge!

When you have a spare half hour, you might get a kick out of [thread=200063]this thread[/thread] (gosh, 63 threads earlier and I would have started the 200,000th PF thread since the dawn of time!) and [thread=199303]this thread[/thread], which are examples of threads in which various posters bewail a phenomenon well supported by observation, namely that few newbie PF posters seem to bother to
  • read carefully,
  • write carefully, or even to obey such basic rules as checking their spelling
As the cited threads show, there has been some spirited discussion about how to try to train them to do things the scholarly way.

Yes, sorry, I will take that into consideration next time.
About negentropy, Brillouin mentions it in the next paragraph :)

While we are at it, could you explain why it is k*T_0 and not (3/2)*k*T_0 in this context:
Our system is composed of the following elements:
1) A charged battery and an electric bulb, representing the electric torch
2) A gas at constant temperature T_0 contained in Maxwell's enclosure [...]
[..]
The battery heats the filament up to a high temperature T_1:
T_1 >> T_0
This condition is required, in order to obtain visible light:
h*ν >> k*T_0
that can be distinguished from the background of the blackbody radiation in the enclosure at temperature T_0
So E_light = h*ν, but what is k*T_0? It can't be the thermal energy of the gas particles, since the factor of 3/2 is missing, or what?
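For scale, a quick numerical illustration of the quoted condition h*ν >> k*T_0 (the frequency, chosen as visible green light, and T_0 = 300 K are illustrative assumptions, not values from Brillouin's paper):
[code]
h = 6.626e-34    # Planck constant, J*s
k = 1.381e-23    # Boltzmann constant, J/K
nu = 5.5e14      # frequency of visible light (~545 nm), Hz
T0 = 300.0       # enclosure temperature, K

photon_energy = h * nu   # energy of one photon from the torch
thermal_scale = k * T0   # characteristic energy of the blackbody background at T_0
print(photon_energy / thermal_scale)   # roughly 90, so h*nu >> k*T_0 holds comfortably
[/code]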
 