metiman
Let's say that I have an LED where If = 20 mA and Vf(typ) = 2 V. I want to power it directly with a 2 V lead-acid battery. According to most answers to this question that I have read, this will either not work (with the outcome unspecified), will destroy the LED, or is just a bad idea because of how an ideal LED is supposed to behave, even though the element in question is not an ideal LED. Let me add that, for the purposes of this discussion, the LED in question is a real-world, non-ideal device.
Some answers just muddy the waters by talking about the difference between linear and nonlinear resistance. The fact that V/I is nonlinear for an LED, and that small changes in voltage across it can result in a large difference in current through it, would not seem to explain why extra voltage would be required in this example just so it could be turned into heat by another V/I element in the circuit. All that means is that you have to be careful not to exceed Vf(max), which in this example seems pretty much guaranteed.
Some explanations go into how an LED is a device that is controlled by current, not by voltage. But I am guessing that I won't get any current through the LED if I don't put any voltage across it. I also suspect that the current through it will vary at least somewhat predictably according to the V/I curve provided by the manufacturer. Increase the voltage across the device and I bet the current through it will increase pretty much as the manufacturer describes.
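To make that concrete, here is a minimal sketch using the Shockley diode equation. The saturation current and ideality factor are assumed, illustrative values (not from any datasheet), chosen so that 2.0 V gives roughly 20 mA:

```python
import math

def led_current(v, i_s=3.2e-19, n=2.0, v_t=0.02585):
    """Shockley diode equation: I = Is * (exp(V / (n*Vt)) - 1).
    i_s and n are assumed, illustrative values, not from any
    datasheet; v_t is the thermal voltage at room temperature."""
    return i_s * (math.exp(v / (n * v_t)) - 1)

# The current is indeed a predictable function of voltage...
for v in (1.9, 2.0, 2.1):
    print(f"{v:.1f} V -> {led_current(v) * 1000:6.1f} mA")
```

With these numbers the current roughly septuples for every 0.1 V increase, so the curve is predictable but extremely steep, and a small voltage error translates into a large current error.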
Individual variance between parts is another reason that is sometimes given. But you could always test each LED yourself, measuring the current through the device at a series of different voltages to derive your own V/I curve. Or just use the worst case from testing several parts and make sure that your source voltage never exceeds whatever you determine the real Vf(max) to be.
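As a sketch of that bench-test approach, suppose you measured the following V/I points on one sample and interpolated between them. The measurements and the current limit here are hypothetical, made up for illustration:

```python
# Hypothetical bench measurements (volts, amps) for one LED sample.
measured = [(1.8, 0.001), (1.9, 0.004), (2.0, 0.020), (2.1, 0.140)]

def interp_current(v, points):
    """Linearly interpolate the measured V/I curve at voltage v."""
    pts = sorted(points)
    for (v0, i0), (v1, i1) in zip(pts, pts[1:]):
        if v0 <= v <= v1:
            return i0 + (i1 - i0) * (v - v0) / (v1 - v0)
    raise ValueError("voltage outside measured range")

I_ABS_MAX = 0.030  # assumed absolute-maximum forward current, 30 mA

for v in (1.95, 2.00, 2.05):
    i = interp_current(v, measured)
    print(f"{v:.2f} V -> {i * 1000:5.1f} mA  {'OK' if i <= I_ABS_MAX else 'OVER'}")
```

On this hypothetical curve, even a 50 mV overshoot past the nominal 2.0 V blows through the current limit, which shows how little margin the "measure it yourself" approach leaves.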
I am willing to accept that it is a bad idea, but I am interested in the real reason why, even when your source voltage happens to be exactly equal to Vf, you supposedly can't use that source simply because it does not allow for a voltage drop across a current-limiting resistor.
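The usual answer can be sketched numerically. Using the same kind of Shockley model (again with illustrative, assumed parameters), compare how sensitive the operating current is to a small source-voltage change with and without a series resistor:

```python
import math

def i_led(v, i_s=3.2e-19, n=2.0, v_t=0.02585):
    # Shockley model with illustrative, assumed parameters (not a datasheet).
    return i_s * (math.exp(v / (n * v_t)) - 1)

def operating_current(v_s, r):
    """Find the operating point of source + series resistor + LED by
    bisecting on the LED voltage until (v_s - v)/r matches the LED current."""
    lo, hi = 0.0, v_s
    for _ in range(100):
        mid = (lo + hi) / 2
        if (v_s - mid) / r > i_led(mid):
            lo = mid  # resistor can supply more than the LED draws here
        else:
            hi = mid
    return i_led((lo + hi) / 2)

# Direct drive: a 0.1 V source error changes the current dramatically.
print(f"direct 2.0 V : {i_led(2.0) * 1000:5.1f} mA")
print(f"direct 2.1 V : {i_led(2.1) * 1000:5.1f} mA")

# 5 V source with a 150 ohm resistor: the same 0.1 V error barely matters.
print(f"5.0 V + 150R : {operating_current(5.0, 150) * 1000:5.1f} mA")
print(f"5.1 V + 150R : {operating_current(5.1, 150) * 1000:5.1f} mA")
```

The resistor absorbs any mismatch between the source voltage and the diode's actual Vf, so the current is set mostly by R and only weakly by the exact curve or battery voltage; with direct drive there is nothing to absorb that mismatch.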
The way I see it, there are five possibilities in this experiment.
1. The LED does not light up.
2. The LED emits light, but at a lower intensity than it could with a higher voltage source and a resistor.
3. The LED emits light at the intensity for which it was designed.
4. The LED is overdriven and emits brightly for a (comparatively) short time before burning out.
5. The LED burns out instantly.