3D Vision Surround: Excellent experience but…
Tom Petersen, Director of Technical Marketing at nVidia, shows Bezel Correction on three Acer 24" 120Hz displays.
The effect is definitely memorable.
During CES 2010, nVidia launched 3D Vision Surround Gaming and nVidia Surround, a dual-GPU answer to AMD's Eyefinity technology. nVidia's setup works with GT200-class hardware such as the GeForce GTX 285, 275 and 260. The only real requirement is that you have three display outputs and are able to drive the resolution at hand.
When nVidia's execs started to discuss 3D Vision Surround, it was obvious to us that this is a whole other ball game in terms of rendering. AMD's upcoming six-display board [codename Trillian, but also known as Radeon HD 5870 Eyefinity6 and/or HD 5890] uses a single GPU and 2GB of GDDR5 memory to drive 7680x3200, i.e. a majestic 24.5 million pixels. In order to keep the target frame rate in games [AMD set the bar at 40 fps for Eyefinity], the Cypress GPU has to process roughly 983 million pixels every second.
In the case of 3-way 3D Vision Surround, three 1920x1080 displays result in a 5760x1080 resolution rendered 120 times each second, i.e. "only" 6.22 million pixels. By simple calculation, the GeForce needs to push 746.5 million pixels every second, and they need to be in perfect sync in order to achieve the 3D effect. nVidia also activated Bezel Correction, which on ATI cards currently works only under Linux. In order to calculate 3D and Bezel Correction, nVidia requires that you use two GT200 or GF100-class GPUs and three DVI connectors.
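For the curious, the arithmetic above is easy to verify. Here is a quick back-of-the-envelope sketch in Python [ours, not nVidia's or AMD's], using the resolutions and refresh targets quoted in this article:

```python
# Pixel throughput for both multi-display setups, as quoted in this article.

# AMD Eyefinity6: six displays driven as one 7680x3200 surface,
# with AMD's stated 40 fps target for Eyefinity gaming.
eyefinity_pixels = 7680 * 3200           # 24,576,000 ~ 24.5 million pixels
eyefinity_rate = eyefinity_pixels * 40   # ~983 million pixels per second

# nVidia 3D Vision Surround: three 1920x1080 panels as a 5760x1080
# surface, refreshed 120 times per second for stereoscopic 3D.
surround_pixels = 5760 * 1080            # 6,220,800 ~ 6.22 million pixels
surround_rate = surround_pixels * 120    # ~746.5 million pixels per second

print(f"Eyefinity6:         {eyefinity_pixels/1e6:.2f} Mpix, {eyefinity_rate/1e6:.1f} Mpix/s")
print(f"3D Vision Surround: {surround_pixels/1e6:.2f} Mpix, {surround_rate/1e6:.1f} Mpix/s")
```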
We played Need for Speed: SHIFT on a dual-GF100 based system and, honestly, the experience was better than on an equivalent AMD Eyefinity setup. The problem I experienced on Eyefinity was that the left and right screens were a little blurred at speed, and the cockpit was positioned differently compared to 3D Vision Surround. In the case of 3DVS, every display looked great and after a lap, you weren't playing the game, you were in the game. In any case, it will be interesting to see how these two competing technologies pan out. We hope there won't be a single mention of "proprietary", as that is the last thing the gaming industry needs.
There is also another concern I wish to address in this article. When it comes to the financial aspect of this gaming experience, we got into a quarrel with Drew Henry about the price. According to the information at hand, many gamers are now considering or switching to Full HD-capable LCD TVs as their gaming displays, even if they stick with the PC platform. The inconvenient truth for PC gaming today is that if size is what you want, you will simply go for a 37", 42" or 46" LCD or plasma TV. You can get a brilliant 46-inch Full HD "3D Ready" Panasonic Viera [ex-Pioneer panel] plasma TV for $850, or just about the price of two 120Hz Acer panels. To add insult to injury, the experience of playing a game on such a screen is even better than 3-display Eyefinity [even with 30" displays] or 3D Vision Surround, as it achieves an effect similar to IMAX movies.
Back in 2005, nVidia and AMD seriously missed the boat with World of Warcraft and, in my opinion, they're doing the same today with tens of millions of potential buyers. Talking up ever larger investments in PC hardware, rather than offering and promoting reasonable combinations, is what limits companies such as AMD, Intel and nVidia. The "gamer" these companies have in mind [one who pays through the nose for all the "latest and greatest"] exists mostly in the minds of the people who run the show, in numbers roughly equal to the owners of all supercars combined. If nVidia or anybody else wishes to increase the amount of high-end hardware it sells, that hardware has to be bundled with high-end gear that doesn't necessarily fit the standard perception of a gaming setup. It would be excellent for us to write about an ideal $2000 setup with a single Fermi or Cypress board and a 3D Ready TV, but until 3DTV support graduates from dull pre-CES press releases to being part of an actual showcase, we don't see this moving forward.
The high-end hardware users who usually approach us purchase a $500 graphics card every 2-3 years, once their previous investment has paid off. These were also the users who were first to adopt 24" Dell displays [if you're a Dell sales rep, remember what the attach rate of the 2405WFP to Dell systems was versus discrete display purchases, i.e. monitor only].
If you ask us about 3D Vision Surround, the answer is quite simple: it's a fantastic experience. But once we calculate setup costs, you're looking at a $1500 price tag for the displays and an additional $1000-1200 for two GF100 boards, i.e. $2500-2700 in total. Not exactly "fantastic."

Real-world Ray Tracing App
According to Jen-Hsun, this is his next Ferrari - the beautiful-looking 458 Italia, rendered using multiple rays.
During the last two years, we heard a lot about ray tracing and games. As it happened, ray tracing didn't exactly gain a lot of traction in game development - we do expect that to change in 2011 and 2012, with the arrival of several titles that will put you in control of a movie. As we all know, nVidia owns Mental Images, and their ray tracing software is second to none in the industry.
For some reason, there is no ray tracing presentation from nVidia without a Bugatti Veyron.
For GF100, nVidia will release a free ray tracing demo application, featuring 12 cars and six different scenes. The cars will be rendered to "look real" at perhaps a few frames per second. According to the demo we saw, GF100 was several times faster at rendering the same scene: 1-4 frames per second on a dual-GF100 setup versus 0.33 frames per second on a GTX 285, i.e. anywhere from roughly 3x to 12x the speed. Those frame rates aren't exactly impressive in absolute terms, and far from real-time, but this was a demonstration of cinema-grade quality. We still feel that true reality is a few years off, though.
When it comes to ray tracing, the interesting bit of information was that nVidia does not use SLI to render the scene, but rather uses the computational power of both GPUs independently and then simply composites and outputs the processed image. This approach is very similar to the principle used by LucidLogix. During our talks with nVidia, we learned that this non-SLI multi-GPU approach won't be limited to ray tracing alone; rather, it will be used wherever it makes sense, since it adds PCIe latency to the mix.
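nVidia didn't detail how the workload is split, but conceptually the approach can be pictured as follows: each GPU independently traces its own slice of the frame, and the slices are then copied and stitched into a single image - the copy being where the extra PCIe latency comes in. Below is a minimal illustrative sketch of that idea [the row-based split and all function names are our own assumptions, not nVidia's implementation], using two CPU processes to stand in for two GPUs:

```python
# Illustrative sketch: two "GPUs" each trace half the frame with no SLI;
# the halves are then merged, the step that adds transfer latency.
from concurrent.futures import ProcessPoolExecutor

WIDTH, HEIGHT = 640, 360   # illustrative render target

def trace_slice(bounds):
    """Stand-in for one GPU tracing rows y0..y1 of the frame.
    A real renderer would shoot rays per pixel; this just fills a
    gradient so the example stays self-contained and runnable."""
    y0, y1 = bounds
    return [[(x + y) % 256 for x in range(WIDTH)] for y in range(y0, y1)]

if __name__ == "__main__":
    # One slice per device: two independent compute jobs, no SLI.
    slices = [(0, HEIGHT // 2), (HEIGHT // 2, HEIGHT)]
    with ProcessPoolExecutor(max_workers=2) as pool:
        halves = list(pool.map(trace_slice, slices))

    # "Compositing": stitch the independently rendered halves into one
    # frame -- the copy that, on real hardware, travels over PCIe.
    frame = halves[0] + halves[1]
    assert len(frame) == HEIGHT and len(frame[0]) == WIDTH
```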