We have already disclosed the power consumption of nVidia's GT300 GPU [or the Fermi CUDA architecture, depending on which naming convention you prefer] for the consumer card, but it looks like 6GB of GDDR5 memory is too tough a cookie to fit inside the 225W limit.

As we all know, power comes to the board in 75W or 150W packages: an x8 electrical PCI Express slot can provide 75W [the same goes for the x16 one], a single 6-pin PEG [PCI Express Graphics] power connector gives you an additional 75W, and an 8-pin PEG connector gives you 150W to play with.
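The arithmetic above is simple enough to sketch in a few lines. This is a hypothetical helper, not anything nVidia ships; the wattage values are the per-source limits described in the paragraph above:

```python
# Power available from each source, in watts, per the PCI Express limits
# described above. The names here are illustrative, not official.
POWER_SOURCES_W = {
    "pcie_slot": 75,   # x8 or x16 electrical slot
    "6pin_peg": 75,    # 6-pin PEG connector
    "8pin_peg": 150,   # 8-pin PEG connector
}

def board_power_budget(connectors):
    """Total board power: the slot plus whatever PEG connectors are fitted."""
    return POWER_SOURCES_W["pcie_slot"] + sum(POWER_SOURCES_W[c] for c in connectors)

print(board_power_budget(["6pin_peg", "6pin_peg"]))  # 6+6-pin board: 225
print(board_power_budget(["6pin_peg", "8pin_peg"]))  # 6+8-pin board: 300
```

Those two totals, 225W and 300W, are exactly the two budgets the rest of this story turns on.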

In the case of upcoming high-memory configurations of nVidia Tesla, Quadro and GeForce cards, the company had to install both a 6-pin and an 8-pin connector, getting 300W of power to play with. However, this was a precautionary measure. According to information we have at hand, the GT300 board [yes, featuring the "Fermi" CUDA architecture] only barely missed the 225W cut-off for the 6+6-pin configuration when the board comes with 6GB of GDDR5 memory.

nVidia could have gone with a 6+6-pin configuration and still shipped the 6GB version, but the efficiency margin would have been too thin [even with digital PWM, boards cannot be 100% power efficient] to qualify for OEM systems. The decision was thus made, and the 6GB cards will come with 300W of available power.

If you’re an overclocker, start chilling the champagne: you will have around 50W to play with on a board with 6GB of GDDR5, and on lower-density boards [if the 8+6-pin configuration is kept] around 60W or maybe even more, depending on how safe nVidia wants to play it. Given the memory controller inside the nVidia GT300, you can go just as wild as on ATI Radeon HD 5870 cards, meaning you could overclock the memory clock by as much as 40%. In the case of GPUs [from both ATI and nVidia], who needs an L3 cache when your memory gives you anywhere between 153.6-179.2GB/s [5870 stock and OC] and 211.1-268.8GB/s [GT300 stock and potential OC]?
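Where do those GB/s figures come from? Peak GDDR5 bandwidth is just the effective data rate times the bus width in bytes. A minimal sketch, assuming the commonly cited 256-bit bus at 4800MHz effective for the HD 5870 and a 384-bit bus for GT300 [the clocks here are our assumptions, chosen to match the figures above, not confirmed specs]:

```python
def gddr5_bandwidth_gbs(effective_clock_mhz, bus_width_bits):
    """Peak bandwidth in GB/s: effective data rate (MT/s) times bus width in bytes."""
    return effective_clock_mhz * (bus_width_bits / 8) / 1000

# Radeon HD 5870: 256-bit bus at 4800 MHz effective GDDR5
print(gddr5_bandwidth_gbs(4800, 256))  # 153.6 GB/s, the stock figure above

# GT300 overclock scenario: assumed 384-bit bus at 5600 MHz effective
print(gddr5_bandwidth_gbs(5600, 384))  # 268.8 GB/s, the potential OC figure above
```

The wider 384-bit bus is why GT300 lands in a higher bandwidth bracket than the 5870 even at comparable memory clocks.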
 
Bear in mind that consumer boards have to withstand much higher temperatures than Quadro and Tesla cards, since those commercial cards ship inside purpose-designed cases, while consumer GeForce boards have to work "with everything inside everything", as our source told us.