Kepler's New Features
Card Physical Specifications
The GTX 680 has to be one of the smallest high-end GPUs that we've seen in quite some time. The GTX 680 is about the same size as an AMD Radeon HD 7870 and significantly shorter than the HD 7970 and GTX 590.
The GTX 680 dual-slot reference board measures 10" in length and includes two dual-link DVI ports, one standard-size HDMI port and one standard-size DisplayPort 1.2. The card is powered by two 6-pin power connectors; adding their 75w each to the 75w supplied by the motherboard slot, the board can draw up to 225w. Despite that, the card is designed with a TDP of only 195w. The card also supports PCI-Express 3.0.
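For those keeping score, the arithmetic behind that 225w ceiling is straightforward. The sketch below simply restates it, assuming the standard PCI-SIG per-connector limits; it is an illustration, not measured data.

```python
# Rough power-budget arithmetic for the GTX 680 reference board.
# Per-connector figures are the standard PCI-SIG limits (an assumption
# of this sketch, not numbers measured from the card).
PCIE_SLOT_W = 75      # power deliverable through the PCIe x16 slot
SIX_PIN_W = 75        # power deliverable per 6-pin PEG connector
SIX_PIN_COUNT = 2
TDP_W = 195           # NVIDIA's stated TDP for the GTX 680

max_draw = PCIE_SLOT_W + SIX_PIN_COUNT * SIX_PIN_W
print(f"Maximum deliverable power: {max_draw} W")        # 225 W
print(f"Headroom above the stated TDP: {max_draw - TDP_W} W")
```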
Top to Bottom: HD 7970, GTX 590, GTX 680, HD 7870
NVIDIA claims that the GTX 680 only requires a 550 Watt PSU, which is 50w less than what they recommended for the GTX 580 even though the GTX 680 is poised to be much faster. The card also has a thermal threshold of 98C, though we don't anticipate it getting anywhere near that high. We will detail exactly how much power the card draws at idle and under load towards the end of our review.

GPU Boost
GPU Boost is a new feature for NVIDIA's GPUs. It is NVIDIA's version of a technique that has existed in the processor industry for quite some time, but never really in GPUs: Intel has Turbo Boost and AMD has Turbo Core. These boosting techniques kick in when the processor isn't under full load and can afford to clock some of its cores beyond their base speed. On the GTX 680, the clocks are checked and adjusted every 100ms.
NVIDIA has applied the same principle to GPU Boost, which uses whatever power headroom the GPU has left to increase the clock speed. When not all of the GPU is being used, there is additional thermal and power capacity on the chip, so the GPU dials its clock upward in a variable manner to deliver better performance in applications that don't fully tax it.
As a result, the GTX 680 runs at a whole host of variable clocks in order to balance performance against heat and power consumption. The base clock is 1006 MHz, with a low-power clock of 324 MHz when the GPU is idle at around 10 percent power. As load ramps up, the GPU climbs to the 1006 MHz base clock and, if the application demands it, to the 1058 MHz boost clock. We have observed the card hitting 1106 MHz in some cases where the application allows it, as the GPU does not necessarily limit itself to 1058 MHz as many have been led to believe. Keep in mind that when NVIDIA refers to the Boost Clock, they mean the average clock frequency of the GPU, as it varies from millisecond to millisecond.
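To make that behavior concrete, here is a minimal Python sketch of how a boost controller of this kind might pick a clock every 100ms. This is our illustration, not NVIDIA's algorithm: the power model, thresholds and 13 MHz step size are all assumptions; only the clock figures come from the card itself.

```python
import random

# A sketch (not NVIDIA's actual algorithm) of a GPU Boost-style controller
# stepping the core clock every 100 ms based on load and power headroom.
IDLE_MHZ, BASE_MHZ, BOOST_MHZ, MAX_OBSERVED_MHZ = 324, 1006, 1058, 1106
POWER_TARGET = 1.0   # normalized board power budget (1.0 = 100% of TDP)
STEP_MHZ = 13        # assumed step size per 100 ms interval

def next_clock(current_mhz: int, load: float, power_draw: float) -> int:
    """Pick the clock for the coming 100 ms interval."""
    if load < 0.10:                          # near-idle: drop to the low-power clock
        return IDLE_MHZ
    if power_draw < POWER_TARGET:            # headroom left: step up toward the cap
        return min(current_mhz + STEP_MHZ, MAX_OBSERVED_MHZ)
    return max(current_mhz - STEP_MHZ, BASE_MHZ)  # over budget: step back down

clock = BASE_MHZ
for _ in range(10):                          # ten 100 ms intervals = 1 second
    load, draw = random.uniform(0.5, 1.0), random.uniform(0.8, 1.05)
    clock = next_clock(clock, load, draw)
    print(f"load={load:.2f} power={draw:.2f} -> {clock} MHz")
```

Averaging the printed clocks over a run is exactly the sense in which NVIDIA quotes the 1058 MHz Boost Clock figure.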
FXAA and TXAA

With Kepler, NVIDIA has been able to run FXAA on a single GTX 680 (GK104) at settings that used to take three GTX 580s with MSAA enabled. This refers to the story we recently reported on during GDC 2012, where Epic Games showed its Samaritan demo running on a single Kepler GPU versus the three Fermis it originally required. FXAA isn't a new technology, as it already exists in some games, but Kepler makes it extremely effective on a single card.
TXAA is NVIDIA's new "film-style" AA technique, designed to exploit the GTX 680's high FP16 texture performance. TXAA combines a CG film-style AA resolve with hardware AA in a single technique, and in the case of 2x TXAA there is an additional optional temporal component for better image quality. There will be two modes: TXAA 1 is claimed to offer visual quality on par with 8x MSAA at roughly the performance cost of 2x MSAA, while TXAA 2 is claimed to deliver image quality beyond 8x MSAA at the performance cost of 4x MSAA.
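NVIDIA has not published how TXAA's resolve actually works, but the temporal component it describes is broadly similar to well-known temporal AA blending. The Python sketch below illustrates that general idea only; the function, blend factor and array shapes are our own assumptions, not NVIDIA's implementation.

```python
import numpy as np

# Generic temporal AA blend: mix the current frame with accumulated
# history to suppress crawling/shimmering edges across frames.
# This is an illustration of the concept, NOT TXAA itself.
def temporal_resolve(current: np.ndarray, history: np.ndarray,
                     blend: float = 0.9) -> np.ndarray:
    """Blend the current frame into the history buffer (both HxWx3, float)."""
    return blend * history + (1.0 - blend) * current

h = w = 4
history = np.zeros((h, w, 3), dtype=np.float32)
for frame in range(8):                         # simulate 8 rendered frames
    current = np.random.rand(h, w, 3).astype(np.float32)  # stand-in for a render
    history = temporal_resolve(current, history)
# 'history' now holds the temporally smoothed image
```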
Currently, no titles use TXAA; as happened with FXAA, it will start showing up in games later this year. Companies like Crytek and Epic Games will support TXAA in their engines, and games like The Secret World, Borderlands 2 and EVE Online will utilize the technology.
Adaptive Vsync

Adaptive Vsync is NVIDIA's feature that enables the GTX 680 and the Kepler architecture to reduce Vertical Sync stutter while also getting rid of screen tearing. The problem with traditional Vertical Sync, or Vsync, is that on a 60Hz display the frame rate locks to 60 FPS or falls to the next divisor, 30 FPS, since those are the predetermined caps designed to eliminate tearing. If you disable Vsync you can run at much higher frame rates, but you run the risk of screen tearing.
The stuttering when using Vsync is a result of the GPU switching between 60 FPS and 30 FPS; every time it does so, the result is a visible stutter. NVIDIA's Adaptive Vsync avoids this by smoothing out the frame rate drop and preventing the card from suddenly falling to 30 FPS. It dynamically toggles Vsync: when the frame rate is below 60 FPS, Vsync is turned off, and when the frame rate is above 60 FPS it is turned on, effectively capping the card at 60 FPS and preventing tearing, but also preventing stutter.
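Reduced to its decision rule, the behavior NVIDIA describes looks something like the Python sketch below. This is our reading of the feature, not driver code; the 60 Hz refresh rate and the per-frame FPS samples are illustrative assumptions.

```python
# Sketch of the per-frame decision Adaptive Vsync appears to make:
# keep vsync on only while the renderer can sustain the refresh rate,
# otherwise allow tearing rather than collapsing to the next vsync
# divisor (60 -> 30 FPS on a 60 Hz panel).
REFRESH_HZ = 60

def vsync_enabled(recent_fps: float) -> bool:
    return recent_fps >= REFRESH_HZ

for fps in (75, 61, 59, 45, 62):
    state = "on (capped at 60)" if vsync_enabled(fps) else "off (tearing possible)"
    print(f"renderer at {fps} FPS -> vsync {state}")
```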
Kepler's New Display Capabilities
With the GTX 680 and Kepler, NVIDIA has introduced some much needed display capabilities. First off, NVIDIA now enables up to four displays to run off of one card. In the past, you could only run two displays per card, so driving more than two displays required two video cards outside of SLI, or two cards (such as GTX 580s) in SLI for up to three displays in 3D Vision Surround (three displays in 3D mode).
Now, you can run up to four displays off of a single card (something AMD has been able to do since the HD 5000 series, with up to six on some cards). As NVIDIA has been demonstrating, this enables a 3D Vision Surround setup with an additional monitor set atop. Or, you could simply run all four monitors in a single span, though that is not very FPS friendly.
Graphics Card Outputs, Left to right: HD 7970, GTX 590, GTX 680, HD 7870
In addition to four-display gaming, NVIDIA has also added support for next-generation 4K and 3GHz HDMI displays. We are really excited about this, as it gives us an opportunity to test the 4K display capability of the GTX 680 and HD 7970 against each other and see which company really has the 4K market ready to be supported.

NVENC
NVENC is NVIDIA's new hardware-level H.264 video encoder. It differs from previous generations because it is a dedicated block that actually runs on fixed-function hardware. In the past, the 'hardware encoder' was really software encoding run on the GPU, using the compute capability of the shader/CUDA cores to do the work. NVIDIA claims that the dedicated block makes the encoder almost four times faster than the previous generation while consuming much less power.
NVENC enables full HD encoding up to 8x faster than real-time and supports H.264 Base, Main and High Profile Level 4.1 encoding. It also supports MVC for stereoscopic video, an extension of the H.264 codec used for Blu-ray 3D. In addition, NVENC supports encode resolutions up to 4096x4096, which means it can handle 4K encoding.
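To put the throughput claim in perspective, here is a quick back-of-the-envelope calculation; the 8x figure is NVIDIA's, while the clip length is our own example.

```python
# What "up to 8x faster than real-time" means for a long recording.
SPEEDUP = 8            # NVENC vs. real-time, per NVIDIA's claim
clip_minutes = 120     # example: a two-hour 1080p recording

encode_minutes = clip_minutes / SPEEDUP
print(f"A {clip_minutes}-minute clip encodes in ~{encode_minutes:.0f} minutes")
# -> A 120-minute clip encodes in ~15 minutes
```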
NVIDIA currently states that the NVENC technology is available through proprietary APIs and provides an SDK for development using NVENC. Later this year, NVIDIA will also open the high-performance NVENC video encoder to CUDA developers. We will be testing this encoder later in our review using Cyberlink's MediaEspresso.