I'm looking at concentrating light (e.g. sunlight) with a lens, and trying to figure out the maximum concentration I can get. If you have a perfectly collimated source, a lens can focus that light down to a point, but when the source has some divergence, the spot you can focus down to will have some finite area.
If a lens has a focal length f, then the radius of the spot you get is f*tanθ, where θ is the angle of divergence of the source (this would be ~0.3° for the sun, for example). The second figure on this page should make the reason for this clear. The concentration is then the ratio of the lens area to the spot area, which is given by πR^2/π(f*tanθ)^2 if R is the radius of the lens.
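To put numbers on this, here's a quick Python sketch of the spot radius and concentration from the formula above (R and f are made-up illustrative values, not any particular lens; θ is the ~0.3° solar half-angle mentioned above):

```python
import math

# Spot radius and concentration for an illustrative lens.
# R and f are assumed values for the sake of the example;
# theta is the ~0.3 degree solar divergence half-angle.
R = 0.1                    # lens radius (m), assumed
f = 0.2                    # focal length (m), assumed
theta = math.radians(0.3)  # source divergence half-angle

spot_radius = f * math.tan(theta)                              # radius of the focused spot
concentration = (math.pi * R**2) / (math.pi * spot_radius**2)  # lens area / spot area

print(f"spot radius   = {spot_radius * 1e3:.2f} mm")
print(f"concentration = {concentration:,.0f}x")
```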
What I'm confused about is that this seems to contradict the results you get from an etendue-type argument. If a lens with f = R is used (about the shortest focal length you can get for a glass/plastic lens), then the concentration ratio given above simplifies to 1/(tanθ)^2. For small θ (which will generally be the case in concentration applications), this is very close to 1/(sinθ)^2, which is the limit from the etendue perspective when your divergence angle after the spot is 90°. However, if you look at the geometric setup of a concentrating lens with f = R, the divergence angle after the spot will only be ~45° (since arctan(R/f) = 45° when f = R). So in my original way of approaching the problem, etendue is not conserved.
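Here's the numerical comparison I'm making (Python; the etendue limit I'm using is sin²(θ_out)/sin²(θ_in) with n = 1, which I believe is the standard form):

```python
import math

# Comparing the geometric concentration for f = R with the etendue limits
# for two exit half-angles: 90 degrees, and the ~45 degrees implied by f = R.
# This just puts numbers on the apparent contradiction described above.
theta = math.radians(0.3)  # solar divergence half-angle

geometric = 1 / math.tan(theta)**2                                # f = R case from above
limit_90deg = 1 / math.sin(theta)**2                              # etendue limit, 90 deg exit
limit_45deg = math.sin(math.radians(45))**2 / math.sin(theta)**2  # etendue limit, 45 deg exit

print(f"geometric (f = R)      : {geometric:,.0f}")
print(f"etendue limit (90 deg) : {limit_90deg:,.0f}")
print(f"etendue limit (45 deg) : {limit_45deg:,.0f}")
# The first two come out essentially equal, while the 45 degree limit is
# about half of both, which is exactly the discrepancy I'm asking about.
```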
Where did I go astray in my approach, or what am I misunderstanding? Many thanks in advance to anyone who can help me understand this!