No more 16-pin 12VHPWR issues.
Or we could just return to GPU series where consumer models don’t even go anywhere near that 600W figure and simply use one or two 8-pin PCIe power connectors and call it a day.
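For reference, the classic connector budget tops out well below that anyway. A tiny sketch using the spec limits (75W from the slot, 150W per 8-pin connector), in Python:

```python
# Classic PCIe power budget: slot plus one or two 8-pin connectors.
# Spec limits: 75 W from the slot, 150 W per 8-pin connector.
SLOT_W, EIGHT_PIN_W = 75, 150

for n in (1, 2):
    print(f"slot + {n}x 8-pin: {SLOT_W + n * EIGHT_PIN_W} W max")
```

Even slot plus two 8-pins is specced for only 375W, a long way short of 600W.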
Consuming this much power for playing a game (and yes, that’s what most people use these cards for) is just silly.
We went through a phase of high-power processors years ago with the Pentium 4 series and its offshoots, then things started getting more efficient instead. I wonder if we will soon see the same for GPUs or if we’ll be stuck on a high-power plateau for a long time.
I don’t see any route to more efficiency. The Pentium 4 was architecturally “broken”, and Intel’s step back to the P6-derived Pentium M lineage is what made the Core models more efficient. There’s no equivalent misstep for Nvidia to walk back.
People keep buying the things, so there is no incentive to go small.
It’s really time to move to 48V instead of 12V. These things are demanding too much power.
Changing to 48V allows you to use thinner wires and smaller connectors but it comes at several costs:
- Most low-resistance (RDson) MOSFETs used in buck converters are only rated to 20V or so. The higher-voltage-rated ones tend to have higher resistance, which causes more heat loss in your buck converter. At a few hundred watts output even a few % loss in efficiency is a lot of power, so you’ll either need to spend more on cooling and heatsinking the converter or more on fancier FETs (rough numbers in the sketch after this list).
- A faulty connector will arc a lot more badly at 48V than at 12V. The arc will strike more easily, sustain more easily, and dump much more heat: roughly 16x as much if you model the fault as a fixed resistance (power goes as V²/R, so 4x the voltage means 16x the power), though I suspect plasma physics doesn’t actually work that way.
- Fuses for 48V are more expensive than 12V ones. Arcs/plasma inside a fuse are harder to quench at higher voltages (because of the higher potential transient fault current, but possibly also because of different electrochemistry and plasma behaviour at the higher voltage?).
- Bucking 48V down to the 1.1V or so that GPUs and CPUs use is a ratio of almost 50:1. That means the buck converters need to run at a very low duty cycle (which causes some technical problems with MOSFET driving and controller choice), or you need to move to using a transformer instead of an inductor. That’s all doable and kind of cool, but might (?) have extra costs involved.
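To put rough numbers on those trade-offs, here’s a back-of-envelope sketch in Python. Every component value in it (the RDson figures, phase count, switching frequency) is an assumption I picked for illustration, not a measurement of any real PSU or VRM:

```python
# Back-of-envelope numbers for the 12 V vs 48 V trade-offs above.
# Every value here is an illustrative assumption, not a measurement.

P_LOAD = 600.0            # W, worst-case GPU draw
V12, V48 = 12.0, 48.0     # bus voltages being compared
V_CORE = 1.1              # V, GPU core voltage

# Cable/connector current -- the main argument *for* 48 V:
print(f"Cable current: {P_LOAD / V12:.0f} A at 12 V, "
      f"{P_LOAD / V48:.1f} A at 48 V")

# Buck FET conduction loss: the FETs carry the *output* current, which
# is the same either way, so higher-RDson (higher-voltage-rated) FETs
# burn proportionally more. Assumed: 1 mOhm low-voltage parts vs
# 4 mOhm 60 V-class parts, in an assumed 16-phase VRM.
I_OUT = P_LOAD / V_CORE   # ~545 A of total core current
N_PHASES = 16
i_phase = I_OUT / N_PHASES
for rds in (1e-3, 4e-3):
    loss = N_PHASES * i_phase**2 * rds
    print(f"RDson {rds * 1e3:.0f} mOhm: ~{loss:.0f} W conduction loss "
          f"({loss / P_LOAD:.1%} of the load)")

# Arc power into a fixed fault resistance: P = V^2/R, so 4x the
# voltage gives 16x the power (the "equivalent-ohmic estimate").
print(f"Arc power ratio: {(V48 / V12) ** 2:.0f}x")

# Buck duty cycle D ~= Vout/Vin: at 48 V in, the FET on-time at a
# typical switching frequency gets uncomfortably short.
F_SW = 500e3              # Hz, assumed switching frequency
for v_in in (V12, V48):
    d = V_CORE / v_in
    print(f"{v_in:.0f} V in: duty cycle {d:.1%}, "
          f"on-time {d / F_SW * 1e9:.0f} ns at {F_SW / 1e3:.0f} kHz")
```

The FET-loss line is the subtle one: the buck FETs carry the output current either way, so the higher RDson of higher-voltage-rated parts turns directly into extra watts of heat.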
Even if we ignore the industry transition costs, things might still end up more expensive at the end of it. Not sure, I work mostly at lower voltages to avoid these problems.
The future lies with 3-phase-480VAC-directly-into-card via the PCIe slots >:)
How many boxes of components do you have at home?
Trick question, my tubes of mosfets are too long to fit in my boxes.
(Many, many boxes. I have shelves on 3 of the 4 walls of my room now, I’d do the 4th but a window is in the way)
… but now we need a connector between the PSU and the motherboard that can do 50 amps (12V × 50A = 600W) or more (multiple PCIe cards).
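A quick pin-count sketch, assuming roughly 9A per contact (a common ballpark for Mini-Fit-style pins; real ratings depend on terminal, wire gauge, and derating):

```python
# How many power contacts a PSU -> motherboard feed would need.
# The 9 A/contact figure is an assumption; real connector ratings
# depend on terminal type, wire gauge, and derating.
from math import ceil

A_PER_PIN = 9.0
for watts in (600, 1200):           # one card, or two
    for volts in (12.0, 48.0):
        amps = watts / volts
        pins = ceil(amps / A_PER_PIN)
        print(f"{watts:>4} W at {volts:.0f} V: {amps:5.1f} A -> "
              f"{pins} power pin(s), plus as many returns")
```

That’s roughly why 12VHPWR ended up at six power pins plus six returns, and why the same wattage at 48V would fit in a much smaller shell.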
Ring terminals with screws might be good, but they are too easy to install incorrectly (loose or the wrong way around) and don’t have a connector shell that prevents them from self-shorting.
If it doesn’t cause stuff to melt, that’s a win in my book…