History Revolvo Ipsum - History repeats itself. Just as x86 processors demolished proprietary CPU architectures in the world of high-performance computing [HPC], it looks like GPUs are repeating that cycle less than ten years later.
During Siggraph 2009, Chaos Group demonstrated the GPU update for its very popular ray-tracing renderer, V-Ray RT. Even though V-Ray RT was released just two months ago, the programmers at Chaos Group are already in the advanced stages of developing a GPU renderer that promises to match the quality of the CPU-based solution. The first performance demonstrations show that, in the simplest scenes, a single GeForce GTX 285 1GB card is as much as 1000x faster than a Core i7 2.93 GHz with all eight threads active.
You read that correctly: not a hundred times, but one thousand times faster. In a simple scene with 60 triangles, the CPU took 25 seconds to render a single frame at 640x480 pixels [standard VGA resolution], i.e. 2.4 frames per minute. The GeForce GTX 285 renders 2,400 frames in a single minute [40 fps], i.e. 1,000 times more than the CPU could.
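The speedup arithmetic can be checked in a few lines. This is only a sketch reproducing the article's reported figures, not an independent measurement:

```python
# Reported figures: simple 60-triangle scene at 640x480 (VGA).
cpu_seconds_per_frame = 25      # Core i7 2.93 GHz, all eight threads
gpu_frames_per_second = 40      # GeForce GTX 285 1GB

cpu_frames_per_minute = 60 / cpu_seconds_per_frame   # 2.4 frames/min
gpu_frames_per_minute = gpu_frames_per_second * 60   # 2400 frames/min

speedup = gpu_frames_per_minute / cpu_frames_per_minute
print(speedup)  # 1000.0
```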
But that was a simple scene. The programming minds behind Chaos Group also demonstrated the Roman Coliseum, a scene consisting of more than 800,000 triangles with five bounces of Global Illumination. It takes the Core i7 CPU two minutes to render a single frame, while the GTX 285 runs at around 5-7 fps, with bursts of up to 13 frames per second. In one hour of rendering, the CPU raytracer will churn out 30 frames. If you went out and bought the best rendering CPU money can buy right now, Intel's Xeon 5580 at 3.33 GHz, you could output around 40 frames in one hour, i.e. just over a second and a half of movie time.
The GPU figures offer a pretty good explanation of why AMD bought ATI for 5.4 billion dollars and why Intel is spending more than three billion dollars to create Larrabee. A single GeForce GTX 285 1GB will output 21,600 frames in a single hour, equal to 864 seconds of movie [again, in VGA resolution].
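The frames-per-hour comparison for the Coliseum scene works out as follows; all inputs are the article's own figures, with the GPU rate taken as the mid-range 6 fps:

```python
# Coliseum scene: 800,000+ triangles, five bounces of Global Illumination.
cpu_frames_per_hour = 60 / 2        # Core i7: 2 minutes per frame -> 30 frames
gpu_frames_per_hour = 6 * 3600      # GTX 285 at ~6 fps -> 21,600 frames

print(cpu_frames_per_hour)          # 30.0
print(gpu_frames_per_hour)          # 21600
print(gpu_frames_per_hour / 25)     # 864.0 seconds of movie at 25 fps
```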
Don't forget, we're talking about VGA resolution. Converting to cinema resolution [4K, i.e. 3840x2160], the Core i7 2.93 GHz would take 54 minutes to render a single 4K frame; in the same time, the GPU would render 720 frames, or 28.8 seconds of animation. The cost-reduction implications are immense: a single GeForce GTX 285 card consumes far less power to render the whole movie, so you earn "green" credentials and still have time left to do more.
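The resolution scaling behind those numbers can be sketched as follows. This assumes render time scales linearly with pixel count, as the article implies, and again uses 6 fps as the GPU's mid-range rate:

```python
# Scale the Coliseum-scene figures from VGA (640x480) to 3840x2160.
vga_pixels = 640 * 480
cinema_pixels = 3840 * 2160
scale = cinema_pixels / vga_pixels              # 27x more pixels

cpu_minutes_per_vga_frame = 2                   # Core i7, Coliseum scene
cpu_minutes_per_4k_frame = cpu_minutes_per_vga_frame * scale
print(cpu_minutes_per_4k_frame)                 # 54.0 minutes

gpu_vga_fps = 6                                 # mid-range of the 5-7 fps figure
gpu_4k_fps = gpu_vga_fps / scale
frames_in_54_min = gpu_4k_fps * cpu_minutes_per_4k_frame * 60
print(round(frames_in_54_min))                  # 720 frames
print(round(frames_in_54_min / 25, 1))          # 28.8 seconds at 25 fps
```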
In the absolute worst-case scenario, a closed room with no interior lighting, the GPU was around 20 times faster than the CPU. At present, the V-Ray RT GPU renderer uses CUDA, but the developer is experimenting with a conversion to OpenCL. Given the similarities between the two, the developer sees no major issues with the conversion, but that is not the only thing on the cards.
Just like its CPU-based counterpart, the GPU renderer will support distributed rendering across multiple GPUs. No specific number was given, but a single workstation can usually host up to four single-GPU boards, or even up to eight GPUs if, for instance, the dual-GPU GeForce GTX 295 were supported.
Given the application's price of a mere 249 Euro or USD [during the promotional period], or the full retail price of 499 Euro or USD [depending on your location], V-Ray RT for Autodesk 3ds Max and the upcoming V-Ray RT for Maya should be a no-miss if you're using ray-tracing technologies. At present, we are evaluating V-Ray RT, and if the application satisfies our demands, we will include it in our standard benchmark suite.
You can download the Siggraph 2009 presentation on Spot 3D [Siggraph 2009: GPU-accelerated V-Ray RT demonstration - QuickTime file download], or watch a shorter video over at CG Architect.
© 2009 - 2014 Bright Side Of News*, All rights reserved.