Today, at Nvidia’s GPU Technology Conference in San Jose, California, Ross Walker of the San Diego Supercomputer Center spoke about the benefits of going with a Kepler GPU for AMBER simulations.

AMBER 12 is the latest iteration of AMBER (Assisted Model Building with Energy Refinement), a suite of programs for running molecular dynamics simulations. What Ross and his team did was take the latest Nvidia GPU, the GeForce GTX Titan, which we recently reviewed, and pit it against the rest of the field.

Here, Ross shows us the previous state of GPU computing with all of the latest desktop and server GPUs prior to the GTX Titan’s release. Do note that the GTX 680 is in fact the slowest card of them all.

Moving on to the second slide, we see a comparison benchmark of all of the above GPUs against a CPU in implicit solvent performance. The benchmark itself is simply three different tests, each measuring how many nanoseconds of simulated time each processing unit can churn through per day of wall-clock time (ns/day). Note that the 8-core/16-thread Intel Xeon E5-2670 got completely crushed by ALL of the GPUs.
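For readers unfamiliar with the metric, ns/day falls straight out of the simulation timestep and the number of integration steps the hardware completes per second. Here is a minimal sketch of that arithmetic; the function name and the sample numbers are our own illustration, not figures from Ross’s slides:

```python
# Illustration of the ns/day metric AMBER benchmarks report: how many
# nanoseconds of molecular motion you can simulate per day of wall-clock time.

def ns_per_day(timestep_fs: float, steps_per_second: float) -> float:
    """Convert raw MD throughput into an ns/day figure."""
    fs_per_day = timestep_fs * steps_per_second * 86_400  # 86,400 s in a day
    return fs_per_day / 1e6  # 1 ns = 1,000,000 fs

# Hypothetical example: a 2 fs timestep at 600 integration steps per second
print(ns_per_day(2.0, 600.0))  # -> 103.68 ns/day
```

The takeaway is that even small differences in steps per second compound into big gaps in how much science gets done per day, which is exactly what the slide’s bar chart is measuring.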

Here is an explicit solvent benchmark which compares all of the previous generations of GPUs as well as CPUs running on UCSD’s SDSC Gordon supercomputer, which features some of the fastest storage in the world. Looking at the graph above, you can see why Ross was so happy about the GeForce GTX Titan. One GTX Titan did the work of almost four GTX 680s and surpassed the performance of two K20X GPUs.

In this benchmark, we see more of the same, except this time the GTX Titan is without a doubt the fastest solution overall as well as the fastest single-processor solution. The fact that it only costs $1,000 is mind-boggling to many of these research institutes, which are used to spending thousands of dollars per GPU.

Ross also mentioned that some of their AMBER simulations did not actually need any ECC, meaning the calculations and algorithms were accurate enough that memory errors did not affect the end result. He did note that some data corruption could occur later on, but that it would not actually change the numbers in the results. If true, this could be a big deal for the rest of the GPGPU programmers forced to buy expensive Tesla GPUs for their ECC capability and Hyper-Q.
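For context on the ECC divide between the two product lines, here is a minimal sketch of how one might query a card’s ECC state using Nvidia’s NVML bindings for Python. This is our own illustration under the assumption that the pynvml package is installed; it is not something Ross demonstrated:

```python
# Check a GPU's ECC mode via NVML (assumes the pynvml package is installed).
import pynvml

pynvml.nvmlInit()
handle = pynvml.nvmlDeviceGetHandleByIndex(0)  # first GPU in the system
try:
    current, pending = pynvml.nvmlDeviceGetEccMode(handle)
    print("ECC enabled now:", bool(current), "| after reboot:", bool(pending))
except pynvml.NVMLError:
    # GeForce cards like the GTX Titan expose no ECC mode at all, which is
    # why the "ECC not needed" result matters to budget-minded labs.
    print("ECC not supported on this GPU")
pynvml.nvmlShutdown()
```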

Finally, Ross showed a slide that charted the trajectory of GPU and CPU performance in AMBER, and in doing so all but issued a call to action: which one would you rather be pouring your effort into? As we can clearly see here, the GeForce GTX Titan is a complete and utter beast and could well be cutting into Nvidia’s Tesla business, if people could actually get ahold of a Titan to begin with.