ECC memory is nice, but can we get something faster?
Now, I was curious whether Super Micro would block me from running standard Core i7 DDR3 DIMMs, like the fast DDR3-2000 triple-channel kit from Kingston. After all, these are very fast modules - CL8 at DDR3-2000 is achievable on the ASUS Rampage Extreme with the newest BIOS - but they are the unbuffered, non-ECC desktop kind. So, I replaced the entire DRAM complement with six HyperX modules, one per memory channel on each CPU. Guess what - they worked! At only DDR3-1333 speed, though, as the BIOS option for "Forced DDR3-1600" didn't seem to take effect.
Can you imagine? If this memory worked at its native speed of 2 GT/s, this system would have 96 GB/s of memory bandwidth for the CPUs alone.
Nevertheless, as you can see in the photo, this big baby can take your favorite Core i7 desktop memory and spread it nicely across no fewer than six channels! It will be lovely to compare once benchmark time comes next week. In theory, this system should give you just a little below 64 GB/s of memory bandwidth. That is more memory bandwidth than most low-end and mainstream graphics cards have available!
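The theoretical figures above can be checked with a quick back-of-the-envelope calculation. This is a sketch assuming the standard 64-bit (8-byte) data bus per DDR3 channel; the helper function name is our own:

```python
# Theoretical peak memory bandwidth:
# transfer rate (MT/s) x bus width (bytes) x number of channels.
# Assumes the standard 64-bit (8-byte) data bus per DDR3 channel.

def peak_bandwidth_gbs(mt_per_s, channels, bus_bytes=8):
    """Peak bandwidth in GB/s (1 GB = 10^9 bytes)."""
    return mt_per_s * 1e6 * bus_bytes * channels / 1e9

# Six channels (three per CPU) at the DDR3-1333 speed we actually got:
print(peak_bandwidth_gbs(1333, 6))   # 63.984 - just below 64 GB/s

# The same six channels, had the HyperX kit run at its native DDR3-2000:
print(peak_bandwidth_gbs(2000, 6))   # 96.0 GB/s
```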
Super Micro's design calls... good or bad?
The Tylersburg 36D chipset IO Hub [NorthBridge chip, Ed.] is the dual-QPI sister of the desktop X58, the only major difference being the second QPI channel, which lets it talk either to two CPUs at the same time, or to one CPU and - get this - another Tylersburg bridge for a dual-IOH configuration with, say, four independent PCIe x16 paths. This board has only one IOH, so we're limited to two PCIe Gen2 x16 slots and one PCIe Gen1 x4 slot. Why isn't the x4 slot running at Gen2 speed, since the IOH supports it? Well, Super Micro, in a questionable decision, allocated the IOH's Gen2 x4 lanes to an optional on-board SAS controller chip from LSI, which our board doesn't have - you need the X8DA3 flavor of the motherboard for that. The x4 slot lanes instead come from the ICH9 SouthBridge chip, which only supports PCIe Gen1 speed and is further constrained by the bandwidth of the ICH-to-IOH connection.
Now, if you use a higher-end SAS RAID controller with a local processor and cache for, say, your SSD array, the doubled bandwidth of PCIe Gen2 would come in handy. So, the IOH's Gen2 lanes should have been routed to that empty slot instead, with the optional on-board SAS relegated to the ICH PCIe lanes. I've added Intel's own SAS RAID controller here with their kind help, and we will see how much the Gen1 speed limits it when running a quad-SSD RAID0 array. The other interfaces - dual Gigabit Ethernet, on-board SATA and USB ports, integrated audio, plus two legacy serial ports and, luckily, PS/2 keyboard and mouse connectors - round out the I/O. Nothing overly exciting there as far as interfaces go.
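To put the Gen1-versus-Gen2 difference in numbers, here is a rough sketch using the usual per-lane throughput figures after 8b/10b encoding overhead (the function name is our own):

```python
# Approximate usable PCIe bandwidth per lane, after 8b/10b encoding:
# Gen1: 2.5 GT/s -> ~250 MB/s per lane; Gen2: 5.0 GT/s -> ~500 MB/s per lane.
PER_LANE_MBS = {1: 250, 2: 500}

def pcie_bandwidth_mbs(gen, lanes):
    """Approximate slot bandwidth in MB/s for a given PCIe generation and width."""
    return PER_LANE_MBS[gen] * lanes

# The x4 slot as wired, off the ICH at Gen1 speed:
print(pcie_bandwidth_mbs(1, 4))  # 1000 MB/s

# The same slot, had it been fed by the IOH's Gen2 lanes:
print(pcie_bandwidth_mbs(2, 4))  # 2000 MB/s
```

A four-SSD RAID0 array can approach or exceed 1 GB/s of sequential throughput, which is why the Gen1 x4 link is a plausible bottleneck for the controller test described above.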
The system is equipped with two 800W PSUs [Power Supply Units], together supplying enough power to feed everything, including two of the fastest graphics cards you could imagine in 2009. In my testing, one PSU alone couldn't even feed an Asus HD4870X2 triple-fan "Harley-lookalike" card together with the rest of the components. Two PSUs, though, handle it fine.
Super Micro's beta BIOS probably needs more work...
What could Super Micro improve here at the board level, before we go into the BIOS? First off, the BIOS showed pretty unusual CPU temperature readings, above the 60°C level - either Super Micro's temperature sensors need to be checked, or the heatsinks need to be replaced. Since the case is fairly spacious, swapping in higher-end heatsinks from the desktop LGA1366 market is a sure option.
Secondly, the PCIe slot layout: a 7-slot configuration with two x16 and two x4 PCIe slots (one Gen1 and one Gen2 for the latter), plus a x1 PCIe slot for a proper audio card instead of the on-board software solution, rounded off by two spare PCI and/or PCI-X slots on the side, would use the chipset resources better and provide more flexible expansion.
Third, as the IOH NorthBridge does heat up quite a bit, Super Micro should replace that thin aluminum heatsink, or at least provide an easy mounting option for a slim local fan - one that wouldn't be blocked by, say, a long graphics card.
Then, of course, some board real estate could be saved by using Intel's 82576 dual-port GbE controller, with its quite decent amount of TCP/IP offload, instead of the two 82573V chips being used now. Yes, the 82576 is more expensive, but you save a PCIe lane and valuable board space.
12 DIMMs can take up to 192GB of DDR3-1333 memory, provided that your pockets are deep enough...
Nevertheless, it's quite an impressive board feature-wise. Asus and a few others claim to have even more impressive, or simply faster, Nehalem-EP workstation motherboards, but we'll reserve judgment until we actually test those boards too. We'll follow up with an article that thoroughly explores the BIOS options...
UPDATE, March 30th, 2009 01:26 UTC - We have published a follow-up containing more details about this exciting new platform. You can find Nehalem-EP Workstation Preview Part II if you click here.
UPDATE, March 31st, 2009 22:58 UTC - The third part of the review, containing various benchmarks, is now published. You can find our Nehalem-EP Part III: Benchmarks if you click on this link.
UPDATE, April 1st, 2009 15:58 UTC - By popular demand, we have recorded a short video offering an inside view of the beast. You can view the video below:
© 2009 - 2014 Bright Side Of News*, All rights reserved.