How Can Lagrange Multipliers Determine Maximum Shannon Entropy?

Irishdoug
Homework Statement
Given a random variable X with d possible outcomes and distribution p(x), prove that the Shannon entropy is maximised by the uniform distribution, in which all outcomes are equally likely: p(x) = 1/d.
Relevant Equations
## H(X) = - \sum_{x}^{} p(x)\log_{2}p(x) ##

##\log_{2}## is used because the course is a Quantum Information one.
I have used the Lagrange multiplier method. I set up the Lagrangian with the constraint ## \sum_{x}^{} p(x) = 1##.

So I have:

##L(p,\lambda) = - \sum_{x}^{} p(x)\log_{2}p(x) - \lambda\left(\sum_{x}^{} p(x) - 1\right)##

I am now supposed to take the partial derivatives with respect to p(x) and ##\lambda##. I believe the derivative with respect to ##\lambda## will give 0, as we have two constants, 1 and -1.

So ##\frac{\partial L}{\partial p(x)} = -\left(\log_{2}p(x) + \frac{1}{\ln 2} + \lambda\right) = 0##

I am unsure what to do with the summation signs, and I am also unsure how to proceed from here. Can I please have some help?
 
The partial with respect to ##\lambda## just recovers your constraint, since the only ##\lambda##-dependent term in your Lagrangian is ##\lambda## times the constraint function. Also consider using an index:

Sample space is ##\{ x_1, x_2, \cdots x_d\}## and ##p_k = p(x_k)##

$$L(p_k, \lambda) = -\sum_{k} p_k \log_2(p_k) - \lambda C(p_k)$$
with ##C## your constraint function, ##C(p_k) = p_1+p_2+\ldots +p_d - 1##; normalized probabilities correspond to ##C=0##.

$$\frac{\partial}{\partial p_k} L = -\log_2(p_k) - \frac{1}{\ln 2} -\lambda \doteq 0$$
$$\frac{\partial}{\partial \lambda} L = -C(p_k) \doteq 0$$
(using ##\doteq## to indicate application of a constraint rather than an a priori identity.)
These are ##d+1## equations in your ##d+1## free variables ##(p_1, p_2, \ldots ,p_d, \lambda)##.
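
To see where this system leads: the stationarity condition forces ##\log_2(p_k) = -\lambda - 1/\ln 2## for every ##k##, so all the ##p_k## are equal, and the constraint ##C = 0## then fixes each one to ##p_k = 1/d##. A quick numerical sanity check of the conclusion (a sketch using NumPy; the function name and the choice ##d = 4## are just for illustration):

```python
import numpy as np

def shannon_entropy(p):
    """H(X) = -sum_k p_k log2(p_k), with the convention 0*log2(0) = 0."""
    p = np.asarray(p, dtype=float)
    nz = p > 0  # skip zero-probability outcomes
    return -np.sum(p[nz] * np.log2(p[nz]))

d = 4
uniform = np.full(d, 1.0 / d)
H_uniform = shannon_entropy(uniform)  # should equal log2(d) = 2 bits

# No random distribution on d outcomes should beat the uniform one:
rng = np.random.default_rng(0)
for _ in range(1000):
    q = rng.random(d)
    q /= q.sum()  # normalize so that sum(q) = 1
    assert shannon_entropy(q) <= H_uniform + 1e-12
```

This only spot-checks the claim, of course; the proof itself comes from the Lagrange equations above (or, more rigorously, from concavity of ##H##, since the stationary point of a concave function on the probability simplex is its maximum).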
 