<snip>
However, I recently learned that PWM is not actually very efficient
with LEDs: it seems that LEDs provide more light per watt when run at
constant low currents than when pulsed at a high current.
Your statement here needs clarification. It's not as nuanced
as it probably should be.
PWM is simply a method of using a duty cycle (from 0% to
100%) to adjust the apparent intensity. It is not necessarily
the case that it is less efficient with LEDs. It may be. But
not necessarily.
Let's say you are using PWM to adjust the brightness of a
single LED. Normally, at 100% duty cycle the LED is at
"normal and desired" brightness. Now you use PWM to reduce
this intensity. Here, quite the opposite of your
conclusion, PWM is actually a MORE efficient method than
others: your nominal power required is 100%
when operating at full brightness and will correspond to the
duty cycle % when operating at other brightness levels. So
operating at 10% duty cycle will require 10% of the power.
A dissipative method would burn the other 90% elsewhere so
that only 10% reached the LED -- HORRIBLE, by comparison. So
you would WANT to use PWM, here, to save power.
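To put numbers on the single-LED case, here's a quick Python sketch. The 5 V supply and 20 mA nominal drive are assumed example values (not from the text), and the "dissipative" alternative models the comparison above, where full nominal power is always drawn:

```python
# Assumed example values: 5 V supply, 20 mA nominal -> 100 mW at full brightness.
P_NOMINAL_MW = 5.0 * 20.0          # milliwatts drawn at 100% duty cycle

def pwm_power_mw(duty):
    # With PWM, average power drawn scales directly with the duty cycle.
    return P_NOMINAL_MW * duty

def dissipative_power_mw(duty):
    # The dissipative alternative: full nominal power is always drawn;
    # whatever the LED does not get is burned elsewhere.
    return P_NOMINAL_MW

print(pwm_power_mw(0.10))          # 10.0 mW -> 10% of nominal
print(dissipative_power_mw(0.10))  # 100.0 mW -> 90% wasted at 10% brightness
```

At 10% brightness the PWM approach draws a tenth of the power, while the dissipative one still draws all of it.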
But let's say you are using PWM because you are multiplexing
a complex LED display. So, here, let's say you are
multiplexing by a factor of 5 because you have 5 columns (or
rows -- pick your terminology) to operate. In order to
achieve a "nominal 100% brightness" in a column, you must
drive it at 5 times the nominal current. So if the current is
20mA, nominally, you need to run them at 100mA, but at 20%
duty cycle. The other columns will also be operated at 20%
for their nominal brightness level. To adjust their
brightness from 0% to 100%, you would PWM them from 0% to
20%. In this case, because the LED voltage will be higher at
5X the nominal current, the power dissipated will be more
than just 5X nominal. But the brightness is determined by the
current, not the power. So in this case one could argue that
PWM wastes some power that would not be wasted if the LEDs
had individual drivers and were not muxed. But you pay this
price for the convenience and reduced cost.
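The 5-column multiplexing arithmetic above can be sketched like this; only the 5 columns and the 20 mA nominal current come from the text, and everything else follows from those two numbers:

```python
COLUMNS = 5
I_NOMINAL_MA = 20                   # desired average ("nominal") LED current, mA

mux_duty = 1.0 / COLUMNS            # each column conducts 1/5 of the time -> 0.2
peak_current_ma = I_NOMINAL_MA * COLUMNS   # 100 mA: 5x nominal, as in the text

def column_duty(brightness):
    """Duty cycle for one muxed column; brightness runs 0.0..1.0."""
    return mux_duty * brightness    # full brightness uses the whole 20% slot

# The average current comes back to nominal at full brightness:
avg_current_ma = peak_current_ma * column_duty(1.0)   # 100 * 0.2 = 20 mA
print(peak_current_ma, avg_current_ma)
```

Dimming a column then means PWM'ing its duty cycle within the 0..20% window, exactly as described above.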
Keep in mind that as far as human perceptions go, so long as
the repetition rate is high enough that the brain cannot
follow it, the brightness perceived will be based upon the
average (integral) of the incident light flux. It will not
depend upon the pulse intensity. (There are other factors,
such as the spatial size and spatial frequency, position on
or off axis of the eye, and the surrounding intensities
nearby that also affect perception... but let's keep this
focused.) If you reduce the repetition rate, then at some
point the brain starts to perceive the peak pulse and will
show a change in brightness perception, along with it. But
it's also confounded by the fact that the pulse is being
noticed as well and that usually isn't desired. So the best
rule of thumb to stick with is that your average current
value represents the perceived flux.
The only other rule to keep in mind is that human perception
is logarithmic, so halving the average current does NOT
cut the apparent brightness by half. To maintain a
constant rate of decline in brightness, you must multiply the
duty cycle by a constant factor for each time unit. So, for
example:
100%
50%
25%
12.5%
6.25%
Would yield constant changes in apparent brightness. If you
achieved those values with fixed time intervals between them,
a human would perceive a "smooth" diminution in brightness
that appeared linear.
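That halving sequence can be generated with a constant multiplier, e.g.:

```python
# Multiplying the duty cycle by a constant factor each step (here 0.5)
# gives roughly equal perceived brightness steps.
def log_dim_steps(start=100.0, factor=0.5, steps=5):
    duties = []
    d = start
    for _ in range(steps):
        duties.append(d)
        d *= factor
    return duties

print(log_dim_steps())   # [100.0, 50.0, 25.0, 12.5, 6.25]
```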
Regardless, PWM is by itself not necessarily inefficient. And
even when you must pay a small price in efficiency because
you are multiplexing and have to use higher pulse currents
because of that, it's still better than the alternative of
paying for individual drives for every LED and not terribly
inefficient, anyway. Other alternatives would be either
excessively expensive for very little gain, or simply worse.
PWM, though, provides better linearity
Of what? Again, note that human perception is logarithmic. So
while PWM can easily be controlled linearly, it's not going
to be perceived that way if that is how you use it. In fact,
PWM would be more usable as a brightness control if the
hardware could PWM accurately in a logarithmic way -- it's
actually a pain to use PWM correctly for brightness control
if you care about human perceptions of relative brightness.
Its advantage is that it is just easy to apply, is all.
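As a sketch of what "PWM accurately in a logarithmic way" might look like in software -- the mapping function and the 100:1 dimming range here are assumptions for illustration, not from the text:

```python
# Map a linear control value to an exponentially spaced duty cycle.
def perceptual_duty(level, min_duty=0.01, max_duty=1.0):
    """Map level in 0.0..1.0 to a duty cycle with logarithmic spacing."""
    if level <= 0.0:
        return 0.0
    return min_duty * (max_duty / min_duty) ** level

# Equal steps in `level` multiply the duty cycle by a constant factor,
# which tracks the eye's roughly logarithmic response.
for lvl in (0.25, 0.5, 0.75, 1.0):
    print(round(perceptual_duty(lvl), 4))
```

With this mapping, a control knob swept at a constant rate produces what looks like a steady brightness ramp, rather than the "all the action at the bottom" feel of a linear duty sweep.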
I guess I should mention something else just to complicate
things (and agree with you.)
If you operate a monochromatic (single) LED at differing
currents (and, consequently, different voltages to achieve
it), then you will get a different wavelength distribution
out of it, too. If you use a spectrophotometer you will see
the shifts. This also means the "color perception" of a
single color LED shifts a little.
So one could argue that keeping the LED current fixed while
PWM'ing to achieve brightness shifts is actually achieving
"stable color" for these reasons (the peak current remains
fixed, just the duty cycle changes.)
But if you are comparing, say, 20mA vs 100mA, then there is
also a color shift in doing so. So I take your point that PWM
achieves "stable color."
But even if I want to use
adjustable current instead of PWM, the current mirror still works:
just attach a voltage-controlled current source to the mirror. So I
think I'm going to stick with that one.
It's what I'd probably do (or use an IC, if I could find one
that is likely to exist for a while, is readily available,
could handle the dissipation, and is cheap enough.)
Jon