I’m sure many of you know this, but for those who don’t…
“( LED || LED ) + R” should never be done. It relies on an exactly matched forward voltage (Vf) for every LED, and even a tiny difference will steer most of the current into the LED with the lowest Vf, heating it more than the others, which lowers its Vf further. Thermal runaway. Same reason you can’t put normal diodes straight in parallel.
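To get a feel for how sharp that imbalance is, here’s a minimal sketch using the Shockley diode equation. The ideality factor and the 50mV mismatch are assumed illustrative values, not data for any particular LED:

```python
import math

VT = 0.02585  # thermal voltage at ~25 degrees C, in volts
N = 2.0       # assumed ideality factor -- illustrative only, varies by LED

def current_ratio(delta_vf):
    """Current ratio between two paralleled LEDs whose I-V curves are
    offset by delta_vf volts. Per the Shockley model
    I = Is * exp(V / (N * VT)), the ratio is exp(delta_vf / (N * VT))."""
    return math.exp(delta_vf / (N * VT))

print(current_ratio(0.050))  # ~2.6x more current in the lower-Vf LED
```

So a mismatch you’d barely notice on a datasheet can nearly triple one LED’s share of the current, even before self-heating makes it worse.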
The advantage of having LEDs in series is obvious. Each LED drops its Vf and the resistor drops whatever voltage remains. The more voltage the resistor drops, the more power it wastes as heat. So the closer your total series Vf gets to the supply voltage, the less power is lost in the resistor for any given LED current (see the sizing sketch below).
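Here’s that sizing math as a small sketch; the function name and variables are just illustrative, not from any library:

```python
def series_led_resistor(v_supply, vf_list, i_led):
    """Size the ballast resistor for one series string of LEDs and report
    the power it wastes. vf_list holds each LED's forward voltage."""
    v_r = v_supply - sum(vf_list)   # voltage left over for the resistor
    if v_r <= 0:
        raise ValueError("supply voltage too low for this series string")
    r = v_r / i_led                 # Ohm's law: R = V / I
    p_r = v_r * i_led               # heat wasted in the resistor
    p_leds = sum(vf_list) * i_led   # power actually delivered to the LEDs
    return r, p_r, p_leds / (p_r + p_leds)
```

It returns the resistor value, the power the resistor burns, and the fraction of supply power that actually reaches the LEDs.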
EG1:
12v supply and 3 white power LEDs (1 Watt each), each needing 300mA at a Vf of 3.5v.
Using a LED1+R1 || LED2+R2 || LED3+R3 setup, each branch draws 300mA, with 3.5v dropped across the LED and 8.5v across the R. That’s (3.5v*300mA) = 1.05 Watts dissipated by the LED and (8.5v*300mA) = 2.55 Watts dissipated by the resistor. Across all three branches, that’s 10.8 Watts drawn from the supply to deliver 3.15 Watts to the LEDs. An efficiency of about 29%.
EG2:
Same supply and LEDs. This time with the LEDs in series.
Using a LED1+LED2+LED3+R setup, each LED drops 3.5v @ 300mA, a total of 10.5v. The R drops what’s left over, 1.5v. So (10.5v*300mA) = 3.15 Watts dissipated in the LEDs and (1.5v*300mA) = 0.45 Watts dissipated in R. That’s 3.6 Watts drawn from the supply to deliver the same 3.15 Watts to the LEDs. An efficiency of about 87.5%.
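Running both examples through the sketch above confirms the numbers (this assumes the series_led_resistor() sketch from earlier; the parallel case is just three identical one-LED strings):

```python
# EG2: one string of three LEDs and one R
r, p_r, eff = series_led_resistor(12.0, [3.5, 3.5, 3.5], 0.300)
print(r, p_r, eff)      # 5.0 ohms, 0.45 W wasted, 87.5% into the LEDs

# EG1: three parallel branches of one LED and one R each
r, p_r, eff = series_led_resistor(12.0, [3.5], 0.300)
print(r, 3 * p_r, eff)  # ~28.3 ohms each, 7.65 W wasted in total, ~29% into the LEDs
```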
Provided your supply voltage exceeds the string’s total Vf, series is always significantly better.