BSN*: Did you get any help from ATI or Nvidia when you developed your code?
Gipsel: No. I just use the publicly available documentation and tools that everyone can download from their websites, without getting any special support.
BSN*: What was the reaction of the Milkyway@Home administration after you released your client?
Gipsel: The very first reactions were not very encouraging. Actually, before my project, there was an optimized application used privately by the well-known guy “Crunch3r”. That application actually sparked my interest in the whole thing. At that time the project staff didn’t react in the best possible way, and it appeared to me that they handled it like a threat. But I guess after a while they were simply convinced by the new possibilities that open up with the massively increased throughput. So they are now much more open and cooperative. If all goes well, it should become possible for the project itself to distribute the ATI GPU application as a stock application.
BSN*: Are you planning to release an Nvidia client as well? If not, why?
Gipsel: Not at the moment, and there are several reasons for that. First of all, the ATI application still needs some polishing, like multi-GPU and Linux support. Furthermore, the project itself is working with Nvidia on a CUDA-powered version. Apparently, Nvidia gives a lot of support to BOINC projects that want to port their applications to CUDA. Together with the mature CUDA SDK, it shouldn’t take long until MW@H also gets a GPU application that supports the latest Nvidia cards.
The reason I started with ATI in the first place was the quite massive performance advantage ATI has on current hardware for the kind of calculations done at Milkyway [double precision math - Ed.]. I hope it will increase the interest in getting GPGPU applications ported to ATI hardware as well, which is in a lot of cases at least as capable as comparable Nvidia offerings. The fact that I’m a member of Team Planet3DNow!, a BOINC team associated with an AMD-oriented website, has no influence whatsoever.
BSN*: What do you recommend to other distributed computing projects? ATI or Nvidia?
Gipsel: I would recommend supporting both ;) Without going too much into the details, there are different advantages to each. Basically, one can use a very simple high-level programming model for ATI that may be enough for simple problems. If not, one has to resort to harder-to-program low-level approaches, but gets very solid performance in return.
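[To give a flavor of the high-level model Gipsel mentions - Brook+, the high-level language in AMD’s Stream SDK at the time - here is a minimal, illustrative kernel sketch; the kernel name and the element-wise operation are ours, not taken from the Milkyway@Home code. - Ed.]

// Brook+: the runtime implicitly executes the kernel once per
// stream element, so no explicit thread indexing is needed.
kernel void sum(float a<>, float b<>, out float c<>)
{
    c = a + b;
}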
If you need to use a lot of double precision calculations, there is simply no way around ATI from a performance standpoint, at least with current hardware. On the other hand, Nvidia has created quite a mature environment with CUDA, enabling relatively easy creation of high-performing GPU applications. From what I hear, they also offer great support to BOINC projects. But we should overcome the need to create two versions of a GPGPU application with the advent of OpenCL, which will be supported by both [AMD and Nvidia - Ed.] as well as Intel. Actually, OpenCL bears a lot of resemblance to CUDA.
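[As a rough illustration of that resemblance, here is a minimal vector-add kernel in CUDA, with its near line-for-line OpenCL counterpart shown in the comment below it; the names are illustrative, not from any project code. - Ed.]

// CUDA: one lightweight thread per array element.
__global__ void vadd(const float *a, const float *b, float *c, int n)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;  // global thread index
    if (i < n)
        c[i] = a[i] + b[i];
}

/* The OpenCL equivalent differs mostly in keywords;
   get_global_id(0) replaces the CUDA index expression:

   __kernel void vadd(__global const float *a, __global const float *b,
                      __global float *c, int n)
   {
       int i = get_global_id(0);
       if (i < n)
           c[i] = a[i] + b[i];
   }
*/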
BSN*: What do you think about GPU computing?
Gipsel: That is almost the same as asking “what do you think of multi-core?”, only taken one or two steps further. GPU computing opens up great possibilities. It offers an increase in performance, and also in performance per watt, of an order of magnitude or even more. Realistically speaking, though, GPU computing is limited to a small range of applications for the time being.
One has to keep in mind that not all problems can be easily ported to it. Developers actually need to implement a lot more parallelism than for a conventional multithreaded application. And we all know how long it took (and still takes!) for mainstream applications such as games to really make use of dual- or quad-core CPUs. And now imagine programming not for four threads, but for thousands or even a million threads!
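[To make that scale concrete, here is a hedged host-side sketch launching the vadd kernel from above: a single call spawns one GPU thread per element, about a million in total, where a CPU version would typically split the same loop over a handful of OS threads. The buffers d_a, d_b and d_c are assumed to be device memory that has already been allocated and filled. - Ed.]

int n = 1 << 20;                      // ~1 million elements
int threadsPerBlock = 256;
int blocks = (n + threadsPerBlock - 1) / threadsPerBlock;
// One launch creates ~1,048,576 lightweight GPU threads; a
// multithreaded CPU version of the same loop would typically
// run on only 2-8 threads.
vadd<<<blocks, threadsPerBlock>>>(d_a, d_b, d_c, n);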
With that, the question from the beginning of this article, “Who has the best GPU computing chip out there?”, is (partially) answered. In the case of the BOINC project Milkyway@Home, the answer is the ATI Radeon 4800 series.
However, it is clear that GPU computing has a mountain to climb, because it has to overcome the programming model itself. Old-school programmers weren’t educated to think in parallel, while the new generation of programmers is. It will take some time, but as programmers mature and experiment, we will enter a whole new era of computing.
What Gipsel did is nothing short of amazing, and it clearly proves that optimization is key to a good application [you lazy scoundrels at Rockstar, are you taking notes for your amateur GTA4 conversion? - Ed.] - the Milkyway@Home project was accelerated by 100 times on the CPU alone, and the GPU then accelerated the original code by 10,000 times. If a 10,000x performance increase isn’t mind-blowing, we don’t know what is.
We would like to thank Andreas Przystawik, aka Gipsel of Planet3DNow! fame, for his time, and to congratulate him on his efforts in the world of distributed computing. At this time, we don’t know what features will appear in the next generation of GPUs, but you can be sure that the staff here at BSN* will keep you updated every step of the way.
Please also take a look at our upcoming follow-up story, where we compare all versions of the Milkyway@Home client on various workunits. There we’ll present the full numbers behind the speed increases mentioned in this interview.