While the mainstream media spreads panic about Influenza A, i.e. H1N1, researchers are racing to accelerate the visualization of cell structures. Thanks to a post on the nTersect blog, we found out about a presentation by researchers Klaus Schulten and John Stone from the University of Illinois, demonstrating how they use nVidia's GPGPU boards with their NAMD/VMD research software.

Klaus Schulten is the Swanlund Professor of Physics at the University of Illinois at Urbana-Champaign [he also serves as Director of the NIH Resource for Macromolecular Modeling and Bioinformatics and Director of the Center for the Physics of Living Cells].

Schulten states "When we use systems with GPUs running NAMD and VMD software, this speed is accelerated and we can do simulations of cells. Important biomedical cellular research problems can be solved by the acceleration offered by GPU chips. With NVIDIA GPUs, our calculations can be done between 200 – 400 times faster on the GPU." We all know the speed-ups achieved when deploying GPGPU technology, but the fact of the matter is that researchers need far more computational power than before, especially once you look at the limits of today's computing infrastructure.
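To get a feel for what a 200 to 400 times speed-up means in wall-clock terms, a trivial bit of arithmetic helps; note that the one-year CPU-only runtime below is a made-up figure for illustration, not a NAMD or VMD benchmark.

```python
# Rough wall-clock illustration of a 200-400x speed-up.
# The 365-day CPU-only runtime is a hypothetical figure, not a measurement.
cpu_days = 365
for speedup in (200, 400):
    gpu_hours = cpu_days * 24 / speedup
    print(f"{speedup}x: {cpu_days} CPU-days of work -> about {gpu_hours:.1f} GPU-hours")
# 200x turns a year-long run into roughly 44 hours; 400x into roughly 22 hours.
```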

The University of Illinois runs the NAMD and VMD research software to simulate and visualize cell structures, with a focus on cell disruption: how the virus reacts to drug treatment and what may be happening in relation to drug resistance. Long story short, they are trying to predict how H1N1 will evolve, so that we know how to respond to future pandemic outbreaks of the Influenza A virus.

Besides that, Stone discussed how theoretical scientists have started to apply "emergency computing" to help address a real-world problem, and for that, vast amounts of computational power are required. GPGPU technology seems to be the way forward for the University of Illinois, and quite honestly, we're not surprised. Universities are always the first to jump on the bandwagon: back in the 1990s, while proprietary RISC architectures were all the rage, we heard of universities starting to adopt x86-based servers and network them before any x86 racks made an appearance. I remember walking into a "server room" that had DEC Alphas in racks and a ton of cheap cases [well, they looked cheap; they were $150 a pop] with AMD K6-IIs, all networked thanks to a Beowulf Linux cluster. Today, the x86 architecture dominates the HPC space, with Intel alone accounting for 80% of all systems.

This is no marketing talk: back in May 2008, the author of these lines spoke with Vijay Pande of the Pande Group [one of their projects is Folding@home] at Stanford University about the need for a more efficient way of researching protein folding. The problem then was that the world's most efficient GPU for Folding@home was the GeForce GTX 280, which was capable of simulating 600 nanoseconds of protein folding in 24 hours.

Today, in 24 hours of non-stop running, a dual-GPU GeForce GTX 295 is capable of simulating 1,400 nanoseconds of protein folding, i.e. 1.4 microseconds. That means, unfortunately, that you need roughly 714,000 GTX 295s to simulate a single second of protein folding in a day. If you want to simulate 24 hours, you need on the order of 61.7 billion GeForce GTX 295 graphics cards, i.e. around 123 billion GT200-class graphics processors. Before you ask: yes, that number is far higher than the total number of GT200 chips ever manufactured. And that was "just" for Folding@home. At that time, a single CPU core was able to process around 4 [yes, four] nanoseconds per day, i.e. 150 times slower than the GeForce GTX 280, let alone newer GPUs. For 24 hours of protein simulation you need roughly 123 billion GT200 GPUs or about 21.6 trillion Core 2 cores. Yes, that's 123 billion versus 21.6 trillion.
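For readers who want to sanity-check that scaling, here is a quick back-of-the-envelope script. It uses only the throughput figures quoted above (600 and 1,400 simulated nanoseconds per day for the GTX 280 and GTX 295, about 4 ns per day for a single CPU core) and assumes perfectly linear scaling across cards and cores, which is of course an idealization.

```python
# Back-of-the-envelope scaling check using only the throughput figures
# quoted in the article; assumes perfectly linear scaling.
# All rates are "simulated nanoseconds per 24 hours of wall-clock time".

NS_PER_SECOND = 1_000_000_000

gtx280_ns_per_day = 600     # GeForce GTX 280 (single GT200)
gtx295_ns_per_day = 1400    # GeForce GTX 295 (dual GT200)
cpu_core_ns_per_day = 4     # a single Core 2-class CPU core

# GTX 295 cards needed to simulate one second of folding within one day
cards_per_simulated_second = NS_PER_SECOND / gtx295_ns_per_day
print(f"GTX 295 cards per simulated second: {cards_per_simulated_second:,.0f}")

# ...and to simulate a full 24 hours of folding within one day
seconds_per_day = 24 * 60 * 60
cards_per_simulated_day = cards_per_simulated_second * seconds_per_day
print(f"GTX 295 cards per simulated day:    {cards_per_simulated_day:,.0f}")
print(f"GT200 GPUs per simulated day:       {2 * cards_per_simulated_day:,.0f}")

# The same exercise for CPU cores, plus the GPU-vs-CPU ratio quoted above
cores_per_simulated_day = NS_PER_SECOND * seconds_per_day / cpu_core_ns_per_day
print(f"CPU cores per simulated day:        {cores_per_simulated_day:,.0f}")
print(f"GTX 280 vs. one CPU core:           {gtx280_ns_per_day / cpu_core_ns_per_day:.0f}x")
```

Running it gives roughly 714,000 GTX 295s per simulated second, about 61.7 billion cards (123 billion GT200 GPUs) per simulated day, and 21.6 trillion CPU cores for the same job, matching the figures above.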

Folding@home saw a 150x improvement. In the case of the NAMD and VMD software, the reported acceleration was in the range of 200 to 400x.