# Determining the focal length of a gradient index lens

1. Apr 9, 2015

### marcnn

1. The problem statement, all variables and given/known data
There are three subquestions in this question, all marked bold.

Let's consider a gradient index lens of thickness $d$, whose refractive index changes with the distance from the axis with the following formula
$$n(r) = n_1 + a r^2$$.

Determine the lens's focal length $f$. Consider only rays incident parallel to the axis whose distance $r_1$ from it is small. Assume that $\lvert \Delta r \rvert \ll d$ and $\lvert \Delta r \rvert \ll r_1$.

2. Relevant equations
$$x(y) = d + \sqrt{R^2 - y^2} - R \approx d - \frac {y^2}{2R}$$
$$l(y) = x(y) \cdot n(y) + n_0\sqrt{[f + d - x(y)]^2 + y^2}$$

3. The attempt at a solution
First of all, by Fermat's principle we know that light travels along the path for which the travel time, or equivalently the optical path length $\int_X^Y n \, dS$, is minimal.

So I would say that the ray will approach the axis for $a>0$. But in fact it is the opposite.
Why?

We can use this principle to calculate the focal length. The thickness of the lens at distance $y$ from the axis is
$$x(y) = d + \sqrt{R^2 - y^2} - R \approx d - \frac {y^2}{2R}$$
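(The quadratic approximation here is just the binomial expansion for $y \ll R$:
$$\sqrt{R^2 - y^2} = R\sqrt{1 - \tfrac{y^2}{R^2}} \approx R\Bigl(1 - \frac{y^2}{2R^2}\Bigr) = R - \frac{y^2}{2R},$$
so $x(y) = d + \sqrt{R^2 - y^2} - R \approx d - \frac{y^2}{2R}$.)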

Let the ray enter the lens at a distance $y$ from the axis at a point $A$. Let it hit the lens's axis at point $B$.
If $f > 0$, then the length of the ray's optical path between points $A$ and $B$ is
$$l(y) = x(y) \cdot n(y) + n_0\sqrt{[f + d - x(y)]^2 + y^2},$$ since $\lvert \Delta r \rvert \ll r_1$ means the ray may be treated as crossing the lens at the constant height $y$.

But if $f<0$ then no ray hits the optical axis (only the rays' extensions). It is claimed that in this case
$$l(y) = x(y) \cdot n(y) - n_0\sqrt{[f + d - x(y)]^2 + y^2}$$

Later on it is claimed that if $f$ is the focal length, then $l(y)$ does not depend on $y$. Why?
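One way to see what that condition buys us: expanding $l(y)$ to order $y^2$ and demanding that the $y^2$ terms cancel gives $\frac{1}{f} = \frac{n_1 - n_0}{n_0 R} - \frac{2ad}{n_0}$, where $n_0$ is the index of the surrounding medium. A quick numerical sketch (with hypothetical numbers, not from the problem) confirms that with this $f$, the residual $l(y) - l(0)$ is only of order $y^4$:

```python
import math

# Hypothetical numbers, chosen only for illustration.
n0, n1 = 1.0, 1.5     # index outside the lens and on its axis
R = 0.1               # radius of curvature of the convex face (m)
d = 0.01              # on-axis thickness (m)
a = 5.0               # gradient coefficient in n(y) = n1 + a*y**2 (m^-2)

# Paraxial focal length from the y**2 cancellation condition:
f = 1.0 / ((n1 - n0) / (n0 * R) - 2 * a * d / n0)

def l(y):
    """Optical path length from entry height y to the axial point B."""
    x = d - y**2 / (2 * R)        # lens thickness at height y
    n = n1 + a * y**2             # local refractive index
    return x * n + n0 * math.sqrt((f + d - x)**2 + y**2)

# For small y, l(y) agrees with l(0) up to terms of order y**4:
print(l(1e-3) - l(0.0))
```

Note the sign of the gradient term: a positive $a$ reduces $1/f$, i.e. the index gradient by itself acts as a diverging element.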

2. Apr 9, 2015

### haruspex

The statement of the problem does not mention a radius of curvature, so maybe it is intended to be a flat disk? The changing index will still cause it to act as a lens.
For $a > 0$, a flat disk would behave like a concave lens, since its optical thickness effectively increases with distance from the axis. If it also has curvature, then whether it is effectively concave or convex will depend on the relationship between $a$ and $R$.

3. Apr 9, 2015

### marcnn

Whoops! I forgot to mention it: the lens is plano-convex, with radius of curvature $R$.

4. Apr 10, 2015

### haruspex

I think I can explain the sign change.
The square-root term is the distance the ray travels through the air to reach the point B. In the second case, B is in front of the lens, so it is a distance the ray did not actually have to travel in coming from B. Hence its contribution to the path length is negative.

5. Apr 10, 2015

### marcnn

Yes, the light didn't have to travel that distance, indeed. But why do we then take it into account instead of ignoring it?

6. Apr 10, 2015

### haruspex

There may be a better way to explain it in physical terms, but here's one way.
We need to find path lengths to a point B on the axis, but in this case the ray never hits the axis. So instead we consider the path length to some point C 'after' intersecting the axis (thinking of the ray as having come through air from a point B on the axis in front of the lens), then subtract the extra distance it has putatively travelled in air from B to C. The point C we choose is just after exit from the lens, but it could have been anywhere later with the same result.
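This sign rule can be checked numerically as well. Using the paraxial result $\frac{1}{f} = \frac{n_1 - n_0}{n_0 R} - \frac{2ad}{n_0}$ (obtained by expanding $l(y)$ to order $y^2$) and hypothetical numbers with the gradient strong enough that $f < 0$, the expression with the minus sign is again constant to order $y^2$:

```python
import math

# Hypothetical numbers with a strong gradient, so the lens diverges (f < 0).
n0, n1 = 1.0, 1.5
R = 0.1               # radius of curvature of the convex face (m)
d = 0.01              # on-axis thickness (m)
a = 300.0             # strong gradient: 2*a*d = 6 exceeds (n1 - n0)/(n0*R) = 5

f = 1.0 / ((n1 - n0) / (n0 * R) - 2 * a * d / n0)   # negative here

def l(y):
    """Optical path with the MINUS sign, as claimed for f < 0."""
    x = d - y**2 / (2 * R)        # lens thickness at height y
    n = n1 + a * y**2             # local refractive index
    return x * n - n0 * math.sqrt((f + d - x)**2 + y**2)

# The virtual distance back to B enters negatively, and l(y) - l(0)
# is again only of order y**4:
print(f, l(1e-3) - l(0.0))
```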