Why is a current limiting resistor necessary in an LED circuit?

In summary: you have an LED with Vf(typ) = 2 V and want to power it directly from a 2 V lead-acid battery. According to most answers to this question, you should not do this: the outcome ranges from the LED not working to its destruction.
  • #1
metiman
Let's say that I have an LED where If = 20 mA and Vf(typ) = 2 V. I want to power it directly with a 2 V lead-acid battery. According to most answers to this question that I have read, this will either not work (with the outcome unspecified), will destroy the LED, or is just a bad idea because of how an ideal LED is supposed to behave, even though the element in question is not an ideal LED. Oh. Let me add that for the purposes of this discussion the LED in question is a real-world, non-ideal device.

Some answers just muddy the water by talking about the difference between linear and nonlinear resistance. The fact that V/I is nonlinear for an LED and that small changes in voltage across it can result in a large difference in current through it would not seem to explain why extra voltage would be required in this example so it could be turned into heat by another V/I element in the circuit. All that means is you have to be careful not to exceed Vf(max). In this example that seems pretty much guaranteed.

Some explanations go into how an LED is a device that is controlled by current. Not by voltage. But I am guessing that I won't get any current through the LED if I don't put any voltage across it. I also suspect that the current through it will vary at least somewhat predictably according to the V/I curve provided by the manufacturer. Increase the voltage across the device and I bet the current through it will increase pretty much as the manufacturer describes.

That's another reason that is sometimes given. Individual variance between parts. But you could always test each LED yourself measuring the current through the device for a series of different voltages to derive your own V/I curve. Or just use the worst case from testing several parts and make sure that your source voltage never exceeds whatever you determine the real Vf(max) to be.

I am willing to accept that it is a bad idea, but I am interested in the real reason why, even when your source voltage just happens to be exactly equal to Vf, you can't use that source because it does not allow for a voltage drop across a current-limiting resistor.

The way I see it there are 5 possibilities in this experiment.
1. The LED does not light up.
2. The LED emits light, but at a lower intensity than it could with a higher voltage source and a resistor.
3. The LED emits light at the intensity for which it was designed.
4. The LED is overdriven and emits brightly for a (comparatively) short time before burning out.
5. The LED burns out instantly.
 
  • #2
2 V is only the typical value; the turn-on voltage varies from LED to LED. Once it turns on, the current responds exponentially to the applied voltage, so you can easily burn out the LED with the 2 V battery.

The turn-on voltage drifts with temperature, so the intensity changes as the LED heats up.

Why are you pushing it? Everyone tells you it's a bad idea, and I am telling you it's a really bad idea. Obviously you have all the reasons already and you don't trust or buy them. Do you want to make a working circuit, or do you just want to burn some LEDs?

You want to run it at constant current rather than constant voltage.
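
To put a rough number on that exponential response: per the Shockley diode equation, the current multiplies by exp(ΔV/(n·Vt)) for every small increase ΔV in forward voltage. A minimal sketch, assuming an ideality factor n ≈ 2 and room-temperature thermal voltage Vt ≈ 26 mV (typical assumed values, not figures from this thread):

```python
import math

n, vt = 2.0, 0.026  # assumed ideality factor and thermal voltage (V)

def current_ratio(delta_v):
    """Factor by which diode current grows for a small forward-voltage increase."""
    return math.exp(delta_v / (n * vt))

# A mere 0.12 V of extra forward voltage multiplies the current by ~10.
print(current_ratio(0.12))  # ~10.0
```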
 
  • #3
I agree with yungman. You must either accept what you have been told, try it in practice and find that other people are, in fact, right, OR look further into the theory and show for yourself, theoretically, that what they tell you is correct.
There is no 'they are wrong' alternative answer.
 
  • #4
I have used LEDs without a resistor when I was younger, and it worked for the few hours I was playing with it. It is lousy design practice: the diode becomes a better conductor as it gets hot, going into thermal runaway, and it is likely to burn out. You don't need to be far above the design voltage for that to happen. But if you are lucky it will simply work. Just try it. Diodes are not that expensive.
 
  • #5
LEDs are designed for a specific current to run through them, as you say you have heard people say.

At that specific current, a given number of electrons will pass through it and excite the atoms at just the rate at which the "perfect" brightness is accomplished.

Since the LED is a DIODE, its electrical characteristic is the same: past a given voltage, the current goes up very, very fast for every increment in voltage. That voltage is also given by the manufacturer.

So you know the typical voltage across the LED when it is conducting, and you know how much current you need to pass through it.

Let's say you need to pass 10 mA through an LED that has a turn-on voltage of 1.5 V, and you have a 2 V power supply. If you connected that directly across the LED, it would conduct a very high current and possibly burn out.

So you put in a resistor that drops 0.5 V while conducting 10 mA, which gives us R = 0.5 V / 10 mA = 50 Ω.
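
As a minimal sketch of that arithmetic (the function name and the values are just the example figures above):

```python
def series_resistor(v_supply, v_led, i_led):
    """Series resistor value: R = (Vsupply - Vled) / Iled."""
    return (v_supply - v_led) / i_led

# The example above: 2 V supply, 1.5 V LED, 10 mA target current.
print(series_resistor(2.0, 1.5, 0.010))  # 50.0 ohms
```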

NOW, suppose you decide to connect the voltage directly, and let's say you connect exactly 1.5 V so it all fits.

As the LED gets hotter, its turn-on voltage will drop, meaning more current, meaning more heat, meaning a lower turn-on voltage, meaning more current, meaning more heat. As Deadbeef said: thermal runaway.

Another, more technical way of putting it is that a resistor makes the circuit more temperature stable, because it adds negative feedback.
 
  • #6
metiman said:
Let's say that I have an LED where If = 20 mA and Vf(typ) = 2 V. I want to power it directly with a 2 V lead-acid battery. According to most answers to this question that I have read, this will either not work (with the outcome unspecified), will destroy the LED, or is just a bad idea because of how an ideal LED is supposed to behave, even though the element in question is not an ideal LED. Oh. Let me add that for the purposes of this discussion the LED in question is a real-world, non-ideal device.

Some answers just muddy the water by talking about the difference between linear and nonlinear resistance. The fact that V/I is nonlinear for an LED and that small changes in voltage across it can result in a large difference in current through it would not seem to explain why extra voltage would be required in this example so it could be turned into heat by another V/I element in the circuit. All that means is you have to be careful not to exceed Vf(max). In this example that seems pretty much guaranteed.

Some explanations go into how an LED is a device that is controlled by current. Not by voltage. But I am guessing that I won't get any current through the LED if I don't put any voltage across it. I also suspect that the current through it will vary at least somewhat predictably according to the V/I curve provided by the manufacturer. Increase the voltage across the device and I bet the current through it will increase pretty much as the manufacturer describes.

That's another reason that is sometimes given. Individual variance between parts. But you could always test each LED yourself measuring the current through the device for a series of different voltages to derive your own V/I curve. Or just use the worst case from testing several parts and make sure that your source voltage never exceeds whatever you determine the real Vf(max) to be.

I am willing to accept that it is a bad idea, but I am interested in the real reason why, even when your source voltage just happens to be exactly equal to Vf, you can't use that source because it does not allow for a voltage drop across a current-limiting resistor.

The way I see it there are 5 possibilities in this experiment.
1. The LED does not light up.
2. The LED emits light, but at a lower intensity than it could with a higher voltage source and a resistor.
3. The LED emits light at the intensity for which it was designed.
4. The LED is overdriven and emits brightly for a (comparatively) short time before burning out.
5. The LED burns out instantly.

An LED is naturally a device amenable to being current driven, & destroyed when voltage driven. Of course if voltage was zero, there would be no current. But "current driven" does not mean driven only by current w/o any voltage.

The phrase "current driven" refers to a drive scheme where the current is held constant to a reasonable degree, & the voltage is determined by the I-V curve of the diode, as well as temperature. If I use a constant current driver to hold a forward current of 10 mA in an LED, the forward voltage drop, Vf, will attain an equilibrium value per the I-V curve. When the device heats up, the parameter "Is", the reverse saturation current, increases drastically.

The relation between Vf & If is as follows per Dr. Shockley's diode equation:

a) If = Is*(exp(Vf/Vt) - 1); and also b) Vf = Vt*ln((If/Is) + 1); where Is is strongly temperature dependent, increasing non-linearly with temp.
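
A direct transcription of those two equations into Python (a sketch only; the Is and Vt values below are placeholder assumptions, not numbers from this post):

```python
import math

I_S = 1e-12  # reverse saturation current (A), placeholder value
V_T = 0.052  # thermal voltage n*k*T/q (V), placeholder for n ~ 2 at room temp

def forward_current(vf):
    """Equation a): If = Is*(exp(Vf/Vt) - 1)."""
    return I_S * (math.exp(vf / V_T) - 1.0)

def forward_voltage(i_f):
    """Equation b): Vf = Vt*ln(If/Is + 1)."""
    return V_T * math.log(i_f / I_S + 1.0)

print(forward_voltage(0.010))  # ~1.20 V at 10 mA with these placeholders
```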

If Vf is held constant, If is computed per Shockley diode equation a) above, & power is If*Vf. This power heats the junction so that temp rises. But Is rises greatly with temp. Since it is Vf, not If, that is held constant, Vf stays the same, so If increases due to Is increasing. Vt is the thermal voltage = nkT/q, & it increases with temp as well, but being in the denominator of the exponential, it tends to reduce If. But the If increase due to Is increasing swamps the decrease due to Vt increasing.

So If increases while Vf holds constant resulting in power increasing. That raises the temp even more, raising Is more, raising If more, raising temp, raising Is, etc. We have thermal runaway, & the device likely does not survive this process.

But if we hold If constant & allow Vf to be incidental & indirectly determined by temp & I-V curve, observe the following. Shockley equation b) shows that Vf is determined by If (being forced by source), Is, & Vt. Power = If*Vf which results in a temp rise. But a temp rise increases Is drastically while increasing Vt only marginally. Is is in the denominator of the b) equation, vs. numerator of the a) equation.

So Vf is driven down greatly by the Is increase, & driven up moderately by the Vt increase. The net result is a decrease in power & the thermal system reaches a stable equilibrium. Of course, when the temp settles to its equilibrium value, we must be sure it is a safe value. If we biased an LED rated for 20 mA with 100 mA, the junction temp may exceed 150 C & lifetime is shortened. But current drive assures stability thermally. As long as the current value forced in the LED is below the OEM recommended value, the LED lamp should perform reliably for many years.
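
Here is a toy iteration contrasting the two drive schemes described above. Everything numeric is an assumption for illustration (a -2 mV/°C drift of Vf, n·Vt ≈ 52 mV, 300 °C/W junction-to-ambient thermal resistance), not data from this thread:

```python
import math

N_VT   = 0.052   # n*Vt near room temp (V), assumed
TEMPCO = 0.002   # Vf falls ~2 mV per deg C at constant current, assumed
R_TH   = 300.0   # junction-to-ambient thermal resistance (deg C/W), assumed
T_AMB  = 25.0    # ambient temperature (deg C)
VF0, IF0 = 2.0, 0.020  # nominal 2 V / 20 mA operating point at 25 deg C

def settle(drive, steps=25):
    t = T_AMB
    for _ in range(steps):
        if drive == "constant_voltage":
            vf = VF0                                         # Vf is forced
            i = IF0 * math.exp(TEMPCO * (t - T_AMB) / N_VT)  # If climbs with temp
        else:                                                # constant current
            i = IF0                                          # If is forced
            vf = VF0 - TEMPCO * (t - T_AMB)                  # Vf settles lower
        t = T_AMB + R_TH * vf * i   # steady-state junction temperature
        if t > 150.0:               # past a typical absolute-max junction temp
            return f"{drive}: thermal runaway (T > 150 C)"
    return f"{drive}: stable at T = {t:.1f} C, If = {i * 1000:.1f} mA"

print(settle("constant_voltage"))  # runs away within a few iterations
print(settle("constant_current"))  # settles near 37 C at 20 mA
```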

If no current source is available, only a voltage source, then a resistor provides stability. A voltage source forces a constant voltage across the series combination of resistor & LED. As the power produces a temp rise, Is increases. But the current cannot increase much, because an increase in If would drop a larger voltage across the resistor, resulting in a decrease in Vf. Power = Vf*If, so both cannot increase together. In fact, should If tend to increase, whether from the temp rise in Is, random noise, or sources induced from neighboring circuits, the resistor immediately drops extra voltage & Vf decreases. Thermal equilibrium is obtained.
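
And a sketch of the resistor case: solving the series circuit's operating point by bisecting on If until Kirchhoff's voltage law is satisfied, using equation b) for Vf (the supply, resistor, Is and Vt values are assumptions):

```python
import math

V_T = 0.052  # assumed n*Vt (V)

def operating_point(v_supply, r, i_s=1e-12, lo=0.0, hi=1.0):
    """Bisect on If until Vsupply = If*R + Vf(If) for the resistor + LED loop."""
    for _ in range(60):
        i = (lo + hi) / 2.0
        vf = V_T * math.log(i / i_s + 1.0)  # equation b)
        if i * r + vf < v_supply:
            lo = i
        else:
            hi = i
    return i

# 5 V supply, 180 ohm resistor: roughly 21 mA through the LED.
print(operating_point(5.0, 180.0))         # ~0.0209 A
# Even if heating raises Is tenfold, the resistor holds the current nearly fixed:
print(operating_point(5.0, 180.0, 1e-11))  # ~0.0216 A
```

That last pair of lines is the negative feedback in action: a tenfold jump in Is moves the current by only a few percent, where a directly connected voltage source would have let it multiply.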

Summary: a device which is "current driven" still needs voltage to operate. The term "current driven" does not imply that voltage is not needed, it merely implies that current must be the forced quantity, while voltage is indirect & incidental. Voltage is all important. If an LED is to be driven at 10 mA If, & the associated Vf is 2.20V per the curve, then the source providing the 10 mA must be able to supply 2.20V as well.

Current-driven devices like LEDs, SCRs, triacs, BJTs, & relay coils all require voltage to work. They just work better if current is the forced or controlled quantity, while voltage is permitted to attain its value through the physics of the device as well as the ambient temp.

An LED is a current driven device, meaning that it should be driven by a constant current source, or by a constant voltage source with a proper value of series resistance. In both cases the supplying source of fixed current must be capable of supplying the voltage value per the I-V curve. Although the parts are current driven, we cannot & must not ignore the voltage. Voltage is also important in a current driven device.

Claude
 
  • #7
metiman said:
Some answers just muddy the water by talking about the difference between linear and nonlinear resistance.

Far from "muddying the water", it's the fact that the LED characteristic is non-linear that makes it a problematical device to drive efficiently. If the introduction of a complicated idea gives you a problem, then you should get to grips with it rather than skirt round it.
 
  • #8
Let's talk about some ways you could operate an LED without a series resistor.
1. You could power the LED with a high speed, positive pulse train and use an inductor instead of a resistor to limit the current. The brightness of the LED will vary if the voltage, pulse width or frequency of the pulse train varies.

2. Speaking of pulse trains, you could probably also do it purely with pulse width modulation if your pulses were short enough.

The problem with both of these methods is there is no negative feedback. With a series resistor there is. As the LED heats up, the voltage across it drops and it begins to draw more current. With a series resistor, as it draws more current the voltage across the resistor increases which limits the current available to the LED.

There is no such feedback with either of the two non-resistive methods I described. If the LED starts to heat up, neither of those two methods will automatically limit the current. You could add a feedback loop by for instance monitoring the brightness and adjusting your pulse train accordingly, but it's a lot easier to just use a resistor.
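
To put numbers on idea 2: with a rectangular pulse train, the average LED current is simply the peak current times the duty cycle. A minimal sketch (the 100 mA peak and 20% duty figures are hypothetical), which also makes the no-feedback point visible: nothing in this calculation reacts if the LED heats up:

```python
def average_current(i_peak, duty_cycle):
    """Mean current of a rectangular pulse train: open-loop, no feedback."""
    return i_peak * duty_cycle

# Hypothetical example: 100 mA peak pulses at 20% duty -> 20 mA average.
print(average_current(0.100, 0.20))  # 0.02 A
```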
 
  • #9
skeptic2 said:
Let's talk about some ways you could operate an LED without a series resistor.
1. You could power the LED with a high speed, positive pulse train and use an inductor instead of a resistor to limit the current. The brightness of the LED will vary if the voltage, pulse width or frequency of the pulse train varies.

2. Speaking of pulse trains, you could probably also do it purely with pulse width modulation if your pulses were short enough.

The problem with both of these methods is there is no negative feedback. With a series resistor there is. As the LED heats up, the voltage across it drops and it begins to draw more current. With a series resistor, as it draws more current the voltage across the resistor increases which limits the current available to the LED.

There is no such feedback with either of the two non-resistive methods I described. If the LED starts to heat up, neither of those two methods will automatically limit the current. You could add a feedback loop by for instance monitoring the brightness and adjusting your pulse train accordingly, but it's a lot easier to just use a resistor.

Using 'pulses' does nothing to compensate for the non-linearity if the supply of pulses is low impedance. To make sure of a particular current through the LED, you need a current source. This can be, simply, a relatively high series resistance, or an active device like a transistor collector.
 
  • #10
Use a resistor!
 
  • #11
Absolutely. Alternatives are only relevant when high powers are involved and high efficiency is needed.
 

What is a current limiting resistor?

A current limiting resistor is an ordinary resistor placed in an electronic circuit to set the amount of current flowing through it. It is necessary in an LED circuit to prevent the LED from burning out due to excessive current.

Why is a current limiting resistor necessary in an LED circuit?

A current limiting resistor is necessary in an LED circuit because an LED is a diode with a very steep, exponential current-voltage characteristic: above its forward voltage, a tiny increase in applied voltage produces a large increase in current. This means an LED can easily be damaged if it is connected directly to a voltage source. The resistor limits the current flowing through the LED, protecting it from damage.

What happens if a current limiting resistor is not used in an LED circuit?

If a current limiting resistor is not used in an LED circuit, the LED could potentially draw too much current and burn out. This can also cause damage to other components in the circuit. In some cases, the LED may still light up, but it will not last as long as it would with a current limiting resistor in place.

How do you calculate the value of a current limiting resistor for an LED circuit?

The value of a current limiting resistor for an LED circuit can be calculated using Ohm's law, R = V/I, where V is the voltage left over after the LED's forward drop (V = Vsupply - Vf) and I is the desired LED current, typically 10-20 mA. For example, with a 5 V supply and a 2 V LED at 20 mA: R = (5 - 2)/0.02 = 150 Ω.

Can I use any value resistor as a current limiting resistor in an LED circuit?

No, it is important to choose the correct value of resistor for an LED circuit. Using a resistor with too high a value will result in the LED being dim or not lighting up at all, while using a resistor with too low a value will allow too much current to flow through the LED, potentially damaging it. Any ordinary resistor of the correct value and an adequate power rating will do the job.
