Solid-State Drives are the next big thing in computing. Why do I say that? Well, the reasons are pretty self-explanatory, but you might not get why I am calling them "the next big thing" when pretty much everyone has one now. It has gotten to the point where I receive more press releases about new SSDs than about just about any other product.

Why are Solid-State Drives so popular, and what do they really represent?

On the surface these are easy questions to answer. An SSD is simply faster than traditional magnetic media like a platter-based HDD. An SSD runs cooler than an HDD with a motor spinning at 7,200 RPM or faster. It uses less power, and the list goes on. But there are issues with SSDs as well. Flash memory has a finite life cycle when it comes to writes; after so many write cycles, a cell simply cannot be written to anymore. This is combatted by wear leveling: a mapping table ensures that data is written evenly across all of the cells. But this handy feature can also be a downfall. Until recently there was no way to properly defragment an SSD, because the operating system could only see the mapping table, not where the data actually sits in the cells. When you defragmented the drive, you were only defragmenting the table.
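To make the mapping-table idea concrete, here is a minimal sketch [in Python, purely illustrative and not any vendor's actual firmware] of wear leveling: logical blocks get remapped to whichever physical cells have seen the fewest erases, so the OS never knows where its data physically lives. The class and method names are made up for the example.

```python
# Minimal, hypothetical sketch of wear leveling via a logical-to-physical
# mapping table. Real SSD firmware (the flash translation layer) is far more
# complex; this only illustrates why the OS never sees physical cell layout.

class WearLevelingTable:
    def __init__(self, num_blocks):
        self.erase_counts = [0] * num_blocks        # wear per physical block
        self.mapping = {}                           # logical block -> physical block
        self.free_blocks = set(range(num_blocks))   # physical blocks not in use

    def write(self, logical_block, data):
        # Retire the old physical block (it will be erased and reused later).
        old = self.mapping.get(logical_block)
        if old is not None:
            self.erase_counts[old] += 1
            self.free_blocks.add(old)
        # Pick the least-worn free physical block so writes spread evenly.
        target = min(self.free_blocks, key=lambda b: self.erase_counts[b])
        self.free_blocks.remove(target)
        self.mapping[logical_block] = target
        # (Actual cell programming of `data` would happen here.)
        return target

# The OS always addresses logical block 0; the data lands somewhere new each time.
table = WearLevelingTable(num_blocks=8)
for i in range(4):
    print("logical 0 -> physical", table.write(0, b"payload"))
```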

For the most part this has been corrected, and so have the performance hits that people saw over extended use.

However, the implementation of this correction differs from manufacturer to manufacturer. There is no set standard, and that is the biggest problem with SSDs today: no standard for how they are built or how they work. Everyone is free to make their drives operate however they like and to incorporate their own features for how data is written to the cells and presented to the OS. This makes writing drivers for drive controllers difficult at best; what works well with one design might not work with another.

This brings us to the state of SATA controllers. We have heard about SATA 6G and are beginning to see these controllers hit the market. But as with the drives themselves, the implementations from each company are going to be different.

There is one thing that is recommended across the board for properly testing SATA 6G performance: disable C-States in your BIOS. Why is this needed? According to the information we have, it has to do with the way many synthetic tests and benchmarks operate. During normal operation C-States should be left enabled to access many "advanced features of the CPU." What does this mean to you? It means that the performance numbers soon to be hitting the internet will be skewed a little, since reviewers will disable C-States in order to show higher numbers. That is a tiny bit misleading unless they show performance in both modes for all SATA 6G implementations.
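As a side note, if your test bench happens to run Linux, you can at least verify which C-States the OS is seeing before and after flipping the BIOS switch; the snippet below reads the standard cpuidle entries in sysfs. This is just a sanity check on our part, not part of any vendor's recommended procedure.

```python
# Quick check of which CPU idle (C-) states the OS currently exposes and how
# often they have been entered. Assumes a Linux system with cpuidle support;
# if the BIOS has C-States disabled, only the shallow states (or none) appear.

import glob, os

for state_dir in sorted(glob.glob("/sys/devices/system/cpu/cpu0/cpuidle/state*")):
    def read(f):
        with open(os.path.join(state_dir, f)) as fh:
            return fh.read().strip()
    # 'name' is e.g. POLL, C1, C1E, C3, C6; 'usage' counts entries into the state.
    print(f"{read('name'):>6}  entered {read('usage')} times, "
          f"{read('time')} us total residency")
```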

Intel Lynnfield, or did you know the P55 chipset only has PCI-e 1.0?
Our next area of concern is how the SATA 6G controller fits into the system. There is going to be a right way and a wrong way. In some cases we are hearing that the new controller will actually pull PCI-e lanes from the CPU in Lynnfield/P55 systems. This will make enabling SLI or CrossFire difficult and may reduce single-card setups to x8 instead of x16. If the information we have is accurate, a SATA 6G controller will need at least one PCIe Gen 2.0 lane to operate properly [2-4 would be better, but that is mostly for RAID performance]. Using a single x1 connection is simply not going to cut it for the majority of transfers. We saw this as an issue on the ASRock P55 Deluxe: the included SATA 6G controller would not work right in the single PCIe x1 slot. Instead it needed to be in one of the PCIe Gen 2.0 slots, which, as mentioned above, pull directly from the CPU on the P55/Lynnfield platform. This is not a good thing and, in our opinion, is the wrong way to implement the new controller. The other option would be to find a way to utilize multiple [existing] PCIe Gen 1.0a lanes [say 4] for the controller to use. These should be available in the P55 chipset and would not have to pull any lanes from the CPU. This would be the better way to implement the controller right now.
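For a rough sanity check, the published per-lane rates tell the story [about 250 MB/s per PCIe 1.x lane and 500 MB/s per PCIe 2.0 lane after encoding overhead, against roughly 600 MB/s of usable SATA 6G bandwidth]. The little Python sketch below just runs those numbers; the exact figures for any given board will vary.

```python
# Back-of-the-envelope check: can a given PCIe link feed a SATA 6G controller?
# Per-lane throughput after 8b/10b encoding (one direction), in MB/s.
PCIE_PER_LANE = {"Gen1": 250, "Gen2": 500}
SATA_6G_USABLE = 600   # 6 Gbps with 8b/10b encoding ~= 600 MB/s usable

def link_budget(gen, lanes):
    bw = PCIE_PER_LANE[gen] * lanes
    verdict = "OK for SATA 6G" if bw >= SATA_6G_USABLE else "bottlenecks SATA 6G"
    print(f"PCIe {gen} x{lanes}: {bw} MB/s -> {verdict}")

link_budget("Gen1", 1)   # 250 MB/s: below even SATA II's ceiling
link_budget("Gen2", 1)   # 500 MB/s: workable for one drive, tight for RAID
link_budget("Gen1", 4)   # 1000 MB/s: chipset lanes, no CPU lanes stolen
```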

Of course the question needs to be asked: if this is true for P55/Lynnfield, how will it work on X58/Bloomfield? After all, there are no PCI-e lanes in the Bloomfield CPU to pull from. This could mean that the SATA 6G controller implementation will be the same across all manufacturers, but again, probably not. Still, since the X58 chipset has PCI-e Gen 2 lanes inside, it should be easier to allocate the needed lanes on an X58 motherboard.

The above problems are among the reasons why SATA 6G and even USB 3.0 will take time to integrate into the P55 and X58 chipsets. After all, where do you pull the PCI-e lanes from in the P55?

SATA 3.0 and SAS 6G are not a magic wand for SSD performance
So what does all this have to do with SSDs? To put it simply, it serves to illustrate a point: there is no set standard, and as of this writing the controllers in most SSDs are only capable of SATA II [3Gbps] transfers. As SATA and SAS 6G become more prevalent, SSDs will need improved controllers to take advantage of the available bandwidth. One engineer we spoke with flat out told us that current SSD controllers are not going to operate any faster on a SATA 6G controller than they do on a SATA II one. Just like traditional magnetic storage, they are limited by their internal designs. We have all been hearing how SATA [and SAS] 6G is going to unlock the potential of the SSD. While this is true, it is not true for current SSD owners; you won't see that until the next generation. SATA and SAS 6G unlock the future potential of both the HDD and the SSD. Current drives are still locked firmly in the present day.
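To put some rough numbers on it: an interface upgrade only helps once the drive itself can outrun the old link. The sketch below assumes a roughly 250 MB/s sequential read figure for a current consumer SSD [an illustrative assumption, not a benchmark result] and compares it against the usable ceilings of SATA II and 6G.

```python
# Why a faster interface alone doesn't speed up a current SSD: the drive's own
# controller, not the cable, is the ceiling. The 250 MB/s figure below is an
# illustrative assumption for a current-generation consumer SSD, not a benchmark.

INTERFACE_CEILING = {"SATA II (3Gbps)": 300, "SATA/SAS 6G": 600}   # MB/s usable
ssd_controller_limit = 250                                          # MB/s, assumed

for name, ceiling in INTERFACE_CEILING.items():
    effective = min(ssd_controller_limit, ceiling)
    print(f"{name:18}: link allows {ceiling} MB/s, drive delivers ~{effective} MB/s")
# Both lines land at ~250 MB/s: moving this drive to a 6G port buys nothing today.
```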

Yet as with all things in the world of marketing, we are influenced to want this now and to expect improvements with our current drives. Sadly, until the current drive/controller model changes, drive-to-system transfer will remain a major bottleneck in system performance.

We have both solutions in house and are preparing a test of them with multiple drives [SSD and HDD] to see how they fare under synthetic and real-world testing loads. We will take a look at the differences between C-States settings [Enabled and Disabled] as well as compare them to enterprise-class 15k RPM SAS drives on an independent controller [provided by LSI]. It will be interesting to see what we find and how performance can be maximized for both synthetic and real-world use.