# Why do we apply sign conventions to optics formulae?

1. Dec 5, 2011

### hale2bopp

When we derive the relations between image distance (v), object distance (u) and focal length (f) for mirrors and lenses (1/v + 1/u = 1/f and 1/v - 1/u = 1/f respectively), using similar triangles and alternate exterior angles, in the last step we apply the standard sign conventions to arrive at these formulae.

But why do we need to apply the sign conventions again while solving problems based on lenses and mirrors? Doesn't that cancel out the effect of applying them during the derivation?

I thought it might be because distances are ultimately always positive, and this was to nullify the effect of taking distances measured opposite to the direction of the incident ray as negative (according to the standard sign conventions). But then why apply sign conventions in the first place?
Hale2bopp

2. Dec 5, 2011

### Stonebridge

If you use the Real is Positive sign convention, the formula for all lenses and mirrors is
1/v + 1/u = 1/f

There is no minus sign.
You then assign a positive value to any distance to a real image or object.
You assign a negative value to virtual image or object distances.
You assign a positive value to the focal length of converging lenses or mirrors and negative to diverging lenses or mirrors.

So you only apply the sign convention once, when you assign values to the formula.

The convention is necessary to differentiate between real and virtual images and converging and diverging lenses and mirrors.
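The procedure above can be sketched numerically. This is an illustrative example (the numbers are not from the thread): a converging mirror with f = 10 cm and a real object 15 cm away, both entered as positive under the Real is Positive convention, and the sign of the result then tells you whether the image is real or virtual.

```python
# Sketch of the "Real is Positive" convention applied to 1/v + 1/u = 1/f.
# Real object/image distances and converging focal lengths are positive;
# virtual distances and diverging focal lengths are negative.

def image_distance(u, f):
    """Solve 1/v + 1/u = 1/f for v."""
    return 1.0 / (1.0 / f - 1.0 / u)

# Converging mirror, f = +10 cm; real object, u = +15 cm.
v = image_distance(u=15.0, f=10.0)
print(v)  # +30.0: positive, so the image is real, 30 cm from the mirror

# Diverging mirror, f = -10 cm; same real object.
v2 = image_distance(u=15.0, f=-10.0)
print(v2)  # -6.0: negative, so the image is virtual, 6 cm behind the mirror
```

The convention is applied exactly once, when the signed values are assigned to u and f; the sign of v is then read off as a result, not re-interpreted.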

3. Dec 7, 2011

### hale2bopp

In school we are currently using the convention where distances measured from the pole in the direction of the incident ray are taken as positive, and distances measured from the pole opposite to the direction of the incident ray are taken as negative.
While your explanation makes sense, how would I apply the same logic to these conventions?
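One way to see how the two conventions relate (an illustrative sketch, not taken from the thread) is to solve the same physical concave-mirror problem under both. The algebraic mirror formula 1/v + 1/u = 1/f is the same; only the signs assigned to u and f differ, and both conventions predict the same physical image.

```python
# Same physical setup in two sign conventions:
# a concave (converging) mirror of focal length 10 cm,
# with a real object 15 cm in front of it.

def solve_mirror(u, f):
    """Solve the mirror equation 1/v + 1/u = 1/f for v."""
    return 1.0 / (1.0 / f - 1.0 / u)

# Real-is-Positive convention: real object and converging focus are positive.
v_rp = solve_mirror(u=15.0, f=10.0)
print(v_rp)  # +30.0 -> real image, 30 cm from the mirror

# Cartesian convention (as described above): incident light travels toward
# the mirror, so the object and focus, measured from the pole against the
# incident ray, are negative.
v_cart = solve_mirror(u=-15.0, f=-10.0)
print(v_cart)  # -30.0 -> image on the same side as the object, i.e. real

# Both conventions agree: a real image 30 cm from the mirror.
```

In each case the convention is applied once, when signed values are substituted; the sign of v is then interpreted within that same convention.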