It is not by accident that Nvidia came up with the Maximus name for the technology that combines its Quadro and Tesla cards into one system. As we mentioned in our earlier article on GPU rendering, we are not interested in delivering techno mumbo-jumbo, numbers for frequencies, wattages and so on, but rather in seeing how certain technologies affect real-life, everyday production in a small/medium video studio. That will be our guiding light in all of our following articles.

As the title suggests, this time we are happy to tell you, first hand, how the so-called Maximus technology can help you deliver your digital content, from concept stage, through development, and into the light of day as a final product.

Is it as great as they market it? Is it worth the money it costs? Well, as with everything in life, the answers are not as black and white, as you will see further down the line.

As a small studio we have to think more creatively and be as efficient as we can to stay competitive on the market with the "big sharks". Like many of you, we stumbled on "the big question": should we opt for Quadro and Tesla cards, or go with the cheaper GTX solution? We also wondered whether it was worth the money, and whether we would get the bang for the buck one would expect from professional technology. Now, let us share one secret we came by on our own.

GTX vs. Maximus: Eye to Eye

Now, we were fortunate enough to have great friends at Nvidia and EVGA who provided us with the hardware so we could find an optimal solution for you to consider. But as always, BSN is here only to give you our experience, with no embellishments and confusing numbers; in the end it's up to you to make sense of our findings.

We did some testing with 4 GTX 680s and built a monster machine for GPU rendering a year or so ago. We weren't blown away as we were hoping we would be, but since then, many things have changed. Nvidia has released a new tier of GTX cards (the GTX Titan and the GTX 780, both built on the same GK110 silicon), and some of the software we tested our rig with has received a major update release.

Back then, we were considering whether a GTX rig would suffice for serious 3D/graphic design/video production. At that time our AMD FirePro cards held up very well against the gaming rig we came up with, mainly because we were looking for a rig that could do every job in our everyday production pipeline. Running 3ds Max with the FirePro gave us decent viewport refresh rates, but on the other hand we had compatibility problems with plugins that utilize CUDA. The Adobe suite installed on the same machine performed better with the GTX cards than with the FirePro (well, after altering the txt file and adding the card to the list of cards supported by the Mercury Playback Engine; more on that trick below), but 3ds Max was so slow on the GTX cards that it almost stopped. And don't let anyone tell you 3ds Max runs great on gaming cards if you are doing serious work, because it does not! So on one hand we had this great compute power at our disposal, and on the other a semi-useful machine consuming so much power that it made no sense to configure a studio around such machines, at least not as workstations.
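For those curious about that txt-file trick: Premiere Pro keeps a plain-text whitelist of GPUs allowed to use the hardware-accelerated Mercury Playback Engine, and adding your card's name, exactly as the driver reports it, enables CUDA acceleration on unlisted GeForce cards. Here is a minimal sketch; the install path and the card name are assumptions for our setup, so check what GPUSniffer (in the same folder) prints for your card before editing anything:

    # A hedged sketch: add a GeForce card to Premiere Pro's Mercury
    # Playback Engine whitelist. Run with admin rights; the path and
    # card name below are assumptions for a default CS6 install.
    path = r"C:\Program Files\Adobe\Adobe Premiere Pro CS6\cuda_supported_cards.txt"
    card = "GeForce GTX 680"

    with open(path, "r+") as f:
        existing = f.read().splitlines()
        if card not in existing:
            f.write("\n" + card + "\n")  # append on its own line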

That was the moment we started looking for a better solution to further push our efficiency. What we came up with was something that works great; again, we were not 100 percent happy, but it was optimal for our needs. We configured one of the rigs with the Maximus technology and the other one with the 4 GTX 680s.

Now, what is Maximus? Maximus is a setup of two professional graphics cards from Nvidia: one from the Quadro series and one from the Tesla series. We had a Quadro K5000 and a Tesla K20c, which makes this a second-generation Maximus. The K5000 typically drives the display and viewports, while the Tesla is dedicated to heavy-duty computational tasks.
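In practice that split is visible to any CUDA application: it simply enumerates the installed GPUs and submits its kernels to the Tesla, leaving the Quadro free for the viewport. A minimal sketch of that selection logic using PyCUDA (the device names in the comments reflect our particular setup):

    import pycuda.driver as cuda

    cuda.init()
    # Enumerate every CUDA-capable GPU in the machine
    devices = [cuda.Device(i) for i in range(cuda.Device.count())]
    for d in devices:
        print(d.name())  # e.g. "Quadro K5000", "Tesla K20c"

    # Pick the Tesla for compute so the Quadro stays free for the display
    compute = next(d for d in devices if "Tesla" in d.name())
    ctx = compute.make_context()
    # ... launch compute kernels here ...
    ctx.pop()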

With this solution we overcame many of the initial problems we had with the previous setup. Artists can now use a very stable, dependable, heavy-duty machine for content creation; with Maximus, everything runs as it should. 3ds Max is blazingly fast in terms of viewport refresh, all of our CUDA-based plugins run nicely, and even our DaVinci Resolve grading software is incredibly responsive, thanks to Maximus. From this point on, everything looked much better. Our Adobe suite is flying: Premiere Pro CS5.5.2 gave us incredible performance (and, after "hacking" the card list for Premiere CS6, even better). We were able to play back 5 full-HD feeds stacked in split-screen mode, with special-effects filters and blending modes applied, and everything ran in real time. In Premiere CS6 we were able to place 20 instances of HD footage on top of 2.5K Blackmagic camera footage, and again everything ran smoothly, with Tesla usage never exceeding 65 percent. This is where Maximus excels: in Premiere Pro. Not to mention that the workload was taken off the rest of the system, so we could do other things simultaneously.

Another benefit: if you want to harness the full power of After Effects' new ray-tracing capabilities and work with 3D text, shapes and animation in real time, you will want to sit behind Maximus. And trust us when we say Maximus is not something they merely claim works; it actually works as it should. Adobe must have worked closely with Nvidia to get this done, and they did a great job, we might add. But to be totally honest, nothing else in AE was made to run faster on the GPU, and considering that the quality of the 3D text and shapes you get from the much-celebrated ray-tracing engine in AE CS6 is not that great, AE didn't benefit much from Maximus. (Despite the nicely put headlines on Nvidia's homepage, you benefit from Maximus mainly when using GPU plugins like GenArts Sapphire or Kronos from The Foundry, as seen on Adobe TV.)
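Incidentally, AE CS6 gates its ray-traced 3D renderer behind the same kind of plain-text GPU whitelist as Premiere, so if you are on an unlisted GeForce card the same edit applies, just to a different file. A sketch under the same assumptions as before (the path and card name are our guesses for a default Windows install, so verify both on your machine):

    # Same trick as the Premiere whitelist, but for After Effects CS6's
    # ray-traced 3D renderer (path and card name are assumptions)
    path = (r"C:\Program Files\Adobe\Adobe After Effects CS6"
            r"\Support Files\raytracer_supported_cards.txt")
    card = "GeForce GTX 680"

    with open(path, "r+") as f:
        if card not in f.read().splitlines():
            f.write("\n" + card + "\n")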

Since we are a studio that does a lot of 3D content creation, we were very pleased to learn that Chaos Group, the creators of V-Ray (one of our in-house renderers), will soon release a new version, V-Ray 3.0, which will include render elements. Those weren't there when we reviewed the 4-GTX rig, and we considered them the missing link keeping GPU rendering out of serious production.

Last time, we ran several tests with the Octane renderer and came away with not-so-great results (compared to Fermi), but at that time Kepler was so new that the guys at OTOY hadn't had a chance to optimize for that generation of GPU; Octane was still in beta testing back then. Now that Octane is at version 1.2, it fully supports Kepler, and later in this article we will see how well it performs.

The other thing we do, and needed a good solution for, is particle and fluid simulation. A new version of the fluid-simulation software RealFlow, RealFlow 2013, was recently released, and we were happy to learn that it has "jumped onto the GPU compute wagon" and runs some of its simulation on the GPU, not to mention that viewport refresh was enhanced.

At this point you may have already figured out where we are taking this. We configured one new machine with Maximus technology and the other with 4 GTX 680s, just to compare how well this setup works: two machines, one for content development (running Maximus) and the other as an independent brute-force renderer (the 4 GTXs). How did that work out? Well, read on, since we discovered something that might be very interesting.

This time we are able to compare two technologies from the same generation (last time we compared Fermi to Kepler). The same scene was given to both machines to render. You wonder what the results were?

This is our test render scene:

And here are the test render times:

This was to be expected when you consider that the 4-GTX rig has roughly 6,000 CUDA cores in total, while the other has "only" about 4,000. Furthermore, the K5000 delivers 2.1 teraflops of single-precision compute and the Tesla K20 delivers 3.5 teraflops, so the results were within our expectations. At this moment we can partially answer your (and our) question of whether Maximus is worth the money. For GPU rendering, the definite answer is no, since just two GTXs give render times comparable to the Maximus pair; on the other hand, this way you can have one scene rendering while you work on the next one. Summing up raw performance, it comes to about 12 teraflops against 5.6 teraflops of single precision, so we could expect roughly double the speed from the GTX rig; then again, power consumption is also much lower on the Maximus rig, which is a very efficient way to get things done. One interesting number did show up, though. When we switched V-Ray from the CUDA rendering engine to the OpenCL engine, the same scene rendered three times faster on the GTX rig than on the Maximus. What exactly V-Ray does with the OpenCL code, which is compiled at render time, we are not sure, but the numbers caught our attention. At this point we started wondering what would happen if we tested this rig against an AMD 7970-based rig, or the new, upcoming Hawaii series of AMD gaming cards. That remains to be seen.
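To make that back-of-the-envelope comparison explicit, here is the arithmetic; the per-card figures are Nvidia's published single-precision peak numbers, not our own measurements:

    # Rough single-precision peak throughput, in teraflops (published specs)
    gtx_680 = 3.09           # per card
    k5000   = 2.1            # Quadro, drives the display
    k20     = 3.52           # Tesla, does the compute

    gtx_rig     = 4 * gtx_680   # ~12.3 TFLOPS
    maximus_rig = k5000 + k20   # ~5.6 TFLOPS

    print(gtx_rig / maximus_rig)  # ~2.2x, hence "roughly double the speed"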

Now, there is one thing we need to say to be totally honest. We ran a longer test (rendering the animation of a simple camera move) to put more stress on the rigs and see how reliable they are, and we are sorry to say that, although the GTX machine was faster in computation, it "stressed out" and simply froze. For some reason it just stopped, so we had to reset the machine to get things done. There was no apparent reason, at least none that we could find, for the machine to stop. After a restart, everything worked as if nothing had happened. Keep in mind that we ran everything with the latest Nvidia certified drivers, and that the machine was well cooled (the cards were running under 75 degrees Celsius).
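For what it's worth, that temperature figure came from polling the driver during the render. If you want to rule out thermal throttling on your own rig during a long job, nvidia-smi (which ships with the Nvidia driver) can log it for you; here is a small sketch wrapping its query flags, with the five-second interval being our arbitrary choice:

    import subprocess, time

    # Poll every GPU's temperature and load while a long render runs
    while True:
        out = subprocess.check_output([
            "nvidia-smi",
            "--query-gpu=name,temperature.gpu,utilization.gpu",
            "--format=csv,noheader",
        ])
        print(out.decode().strip())
        time.sleep(5)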

Later on we tried to reproduce the same situation, and after several attempts, with both the same and different scenes, we weren't able to repeat the freeze. What conclusion to draw from this we are not sure; maybe it was just a one-off glitch that will not repeat.

We are aware that for many small/mid-sized studios this setup is not an option due to the costs involved, but it is something we are very happy with, and at this point we are considering building several more dual systems (one rig for DCC and one for rendering) so we could extend access to "render farm" computational power while still being able to work efficiently locally. Harnessing the power of the GPU is coming more and more into everyday use, especially because software companies are realizing that the GPU is the future of final image delivery.

Did we answer your question about whether buying pro cards versus gaming cards makes sense? Well, we can only convey our own experience with the pro setup, but rest assured, it makes sense to have a Maximus setup in your main development machine, if for nothing else than the fact that everything finally works flawlessly. It is a great time-saver to have a machine that delivers high performance while you are modeling, editing or simulating fluids, can later join the rendering process, and still stays quiet and low on energy consumption.

On the other hand, if you are not working on projects that need to be 100 percent reliable (like flying to the moon), are willing to restart a couple of times, and are used to the Ctrl+S shortcut, gaming rigs will not just outperform pro rigs, they will also leave more "dineros" in your wallet for other necessities like faster/more storage, better output devices, and external devices like color-grading consoles, graphics tablets, etc.