Binomial Theorem Application in Cauchy's and Sellmeier's Equations

AI Thread Summary
The discussion focuses on applying the binomial theorem to relate Cauchy's and Sellmeier's equations in optics. The task is to show that Cauchy's equation approximates Sellmeier's equation when the wavelength is much larger than the characteristic wavelengths. The hint is to rewrite Sellmeier's equation with only the first term of the sum, expand it by the binomial theorem, then take the square root of n^2 and expand again. The key step is recasting the fractional term in a form suitable for expansion, which makes the relationship between the two equations clear.
Luminous Blob
I am trying to do a question from Eugene Hecht's Optics book, which goes something like this:

Given the following equations:

Cauchy's Equation:

n = C_1 + \frac{C_2}{\lambda^2} + \frac{C_3}{\lambda^4} + ...

Sellmeier's Equation:

n^2 = 1 + \sum_{j} \frac{A_j\lambda^2}{\lambda^2-\lambda_{0j}^2}

where the A_j terms are constants and each \lambda_{0j} is the vacuum wavelength associated with a natural frequency \nu_{0j}, such that \lambda_{0j}\nu_{0j} = c.

Show that where \lambda \gg \lambda_{0j}, Cauchy's Equation is an approximation of Sellmeier's Equation.

Now it also gives a hint which is as follows:

Write the above expression with only the first term in the sum; expand it by the binomial theorem; take the square root of n^2 and expand again.

From the hint, I gather that it means to rewrite Sellmeier's Equation as:

n^2 = 1 + \frac{A\lambda^2}{\lambda^2 - \lambda_0^2}

From there though, I have no idea how to apply the binomial theorem to expand it. I just don't see how anything in that equation has the form (x+y)^n, except for where n = 1.

If anyone can explain to me how to apply the binomial theorem to the equation, or if I've misunderstood what the hint means, it would be much appreciated.
 
You can use the binomial theorem to expand (1+x)^{1/2} when x<<1.
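That is, for x \ll 1 the binomial series gives

(1+x)^{1/2} \approx 1 + \frac{x}{2} - \frac{x^2}{8} + \ldots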
 
So you mean to first take the square root of both sides and then expand using the binomial theorem, letting x = \frac{A\lambda^2}{\lambda^2 - \lambda_0^2}, rather than first applying the binomial theorem, then taking the square root of both sides and expanding again, as the hint suggests?
 
Rewrite \frac{A_j\lambda^2}{\lambda^2-\lambda_{0j}^2} as

A_j\,\frac{1}{1-\frac{\lambda_{0j}^2}{\lambda^2}} and expand the second factor as

\frac{1}{1-x^2} \approx 1 + x^2 + x^4 + x^6 + \ldots where x = \frac{\lambda_{0j}}{\lambda}
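Carrying the hint through with the single-term form n^2 = 1 + \frac{A\lambda^2}{\lambda^2 - \lambda_0^2} (a sketch, keeping terms up to order 1/\lambda^4):

n^2 \approx 1 + A\left(1 + \frac{\lambda_0^2}{\lambda^2} + \frac{\lambda_0^4}{\lambda^4}\right) = (1+A)\left[1 + \frac{A}{1+A}\left(\frac{\lambda_0^2}{\lambda^2} + \frac{\lambda_0^4}{\lambda^4}\right)\right]

Taking the square root and expanding again with (1+x)^{1/2} \approx 1 + \frac{x}{2} - \frac{x^2}{8} gives

n \approx \sqrt{1+A}\left[1 + \frac{A}{2(1+A)}\frac{\lambda_0^2}{\lambda^2} + \left(\frac{A}{2(1+A)} - \frac{A^2}{8(1+A)^2}\right)\frac{\lambda_0^4}{\lambda^4}\right]

which has the Cauchy form n \approx C_1 + \frac{C_2}{\lambda^2} + \frac{C_3}{\lambda^4} with C_1 = \sqrt{1+A} and C_2 = \frac{A\lambda_0^2}{2\sqrt{1+A}}.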
 
Aah, I didn't think to do that. Thanks, that was a great help.
 
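As a quick check of the expansion above, here is a small SymPy sketch (the symbol names and the expansion order are just illustrative) that expands the single-term Sellmeier expression in powers of 1/\lambda:

import sympy as sp

# Symbols for the single-term Sellmeier expression
A, lam, lam0, u = sp.symbols('A lambda lambda_0 u', positive=True)

# n = sqrt(1 + A*lambda^2 / (lambda^2 - lambda_0^2))
n = sp.sqrt(1 + A*lam**2 / (lam**2 - lam0**2))

# Expand in powers of 1/lambda: substitute lambda -> 1/u and expand about u = 0
n_series = sp.series(n.subs(lam, 1/u), u, 0, 6).removeO()

# Substitute back; the result should have the Cauchy form
# C1 + C2/lambda^2 + C3/lambda^4 with C1 = sqrt(1+A) and C2 = A*lambda_0^2/(2*sqrt(1+A))
print(sp.expand(n_series.subs(u, 1/lam)))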