Why DirectX 11 will save the video card industry and why you don’t care

In any new technology you can see a series of stages. There’s the early development stage in labs, experiments, and academic projects; then an infant stage where things are finally available to the public; an unsteady toddler stage where capabilities are discovered and explored; a semi-stable childhood where lots of internal growth happens but big business has yet to take it seriously; then a sudden catapult into an awkward adolescence that can completely change everything; and finally a settling into maturity where change is slow and incremental.

Is 3D acceleration hardware now about as exciting as modern American politics?
3D graphics hardware has hit the maturity plateau. Early on there was a big war of architectures: triangles vs. infinite planes, rasterizers vs. tile renderers, quadratic surfaces, and so on. The second generation came in and swept all before it like magic (or Voodoo), and for a while it was unclear which of the old guard would make the 2D-to-3D transition and which of the newcomers would make the 3D-to-2D transition.

Features matured slowly (32-bit color) but capacity improved quickly (texture pipelines) for a while, and then suddenly it was A Big Deal(TM) when desktop OEMs (at the insistence of their customers, following their prophet Carmack) decided every PC needed a 3D accelerator. Then it was a war of T-buffer vs. T&L vs. Tile Rendering vs. Two Chips Rendering Alternate Frames vs. Two Monitor Support.

Mistakes and fortunes were made, lawsuits filed, triumphs celebrated, cheating exposed, allegiances shifted, dreams dashed. In the end the dust cleared, leaving a stable two-party system with the occasional bit player (Oy!) from a sideline (or three) raising just enough hope (or fear) of change to keep things from stagnating.

Nintendo has realized that “good enough” is where the money is
Nintendo actually got it right. Wii all laughed at first, but they’re the ones who’ve staged the biggest comeback in gaming history, because they figured the market out. They knew that, yes, HDTVs are awesome and cool, but most households in this generation (2006-2011) would only have one. The kids’ rooms would all get the hand-me-down CRTs, and those are still standard-def. So target them, and win over the brothers and sisters and parents with interesting game designs that break the mold, following the successful experiment called the DS.

Ever wonder why it’s not the Game Boy DS? Perhaps because the two-screen, touchscreen-and-stylus concept might have flopped and needed to be axed like the Virtual Boy; after that experience, Nintendo was wary of risking its successful brand on an experiment in case it had to retreat to a more traditional product. But it didn’t flop, so it got tweaked into the DS Lite and eventually the DSi. People criticize the Wii as being “last-generation”, since it does not even pretend to be anything other than an overclocked GameCube Slim with a weird controller – that’s it. But anyone who believes this means Nintendo “has lost it” or “no longer cares about real gamers” needs to take a look at just how long Nintendo has been in business and what its profit/loss record looks like.

All the developers who learned the ins and outs and tricks of the poor GameCube can still apply them, and can spend their time messing with the motion sensor instead of debugging their engines. And to say Nintendo has no concept of how to use high-powered hardware is to forget the ridiculously powerful N64 architecture, which made the PS1 and Saturn look like what they were: a refurbished workstation from the late ’80s and a last-minute scramble to kludge 3D onto off-the-shelf hardware. The N64 was a freaking SGI supercomputer scaled down to TV resolution with a cartridge slot slapped on.

The downfall was the cost and supply of the cartridges, which Nintendo used to maintain its iron grip on content control. Had they been less arrogant, things might look very different today indeed. But the Wii was most likely not the successor to the GameCube so much as the backup plan: a strategic decision to aim for the “second TV effect” with its price point and to urge developers to focus on the innovative controls rather than on shiny new hardware features, as they usually do.

This was an absolute master stroke and harkens back to the Nintendo that stared down Atari and Coleco and Intellivision even as the industry burned around them and went to bat vs. Universal Studios in court and sent them packing. They haven’t “lost it”, they’re back, and laughing their way to the bank after a start in last place this generation. Don’t assume they’ll be making the same mistakes or are incapable of producing a high-spec machine if it fits their strategy.

What happens to the industry when 3D game performance is as “good enough” as word processors have been?
But enough about the consoles, which continue to grow in dominance, pushing PC gaming into single-digit percentages while publishers insult their customers with DRM and mandatory network connections for single-player games. The PC market has long had this secret dark side that nobody wants to talk about, called gaming. The big OEMs don’t want to believe that something as unprofessional as gaming is what has been driving their hardware upgrade demand for so many years, since Moore’s Law took care of that demanding WYSIWYG word processor sometime around 1995. They need to believe the average cubicle drone, who does the same tasks they have done since the mid-’80s, has some insatiable demand for performance, or that “innovative” software improvements like transparent window borders, or menus that actually spend resources to dynamically hide their overly complex feature creep, will somehow drive demand. We all watch the dance around this huge elephant in the room, the one named gaming, with the game-character tattoos on its side and the trunk that’s actually a rocket launcher.

Nobody in their right mind actually believes a dedicated chip with billions of transistors for rendering 3D graphics is useful to the majority of business users. There’s only one reason to have it, and the fact that such chips have integrated themselves into video cards, then motherboards, and soon CPUs only confirms what we’ve known for ages is really the truth. Everyone Plays Games On The Computer. Everyone. Or if they don’t, they want to.

But the corporate structure can’t justify that, so it’s just been made an integrated feature of every machine with a wink and a nudge. The fact is that gaming has gone mainstream, and thus it can no longer be ignored. It’s no longer the surreptitiously installed copy of Doom that the IT guys play after hours, or the installed-by-default solitaire, or the text-based fantasy half the computer lab is logged in to instead of doing “research”. It’s big. It’s Billions Per Year big, and it’s not going away, and without it the entire industry would have stopped upgrading years ago.

All the low-hanging fruit on the tree is gone, and the free lunch is over
Unfortunately for Microsoft, AMD, and Nvidia (and fortunately for Intel?), maturity arrived with DX9.0 on Windows XP. Image quality has hit diminishing returns of “WOW factor” and no longer drives people as hard to upgrade. Instead we have to contend with the “WoW factor”, where games with older engines and lower hardware demands now dominate the gaming experience for the majority, rather than the latest, greatest engines being popular enough to drive upgrades as they did in the past. (See Quake 2 multitexture, Quake 3 T&L, Doom 3 vertex shaders, Oblivion HDR, Crysis… what feature did that one add again?)

Yes, the cards are still twice as fast every six months, and the push for HD resolutions has helped keep things going for a generation, but these days barely anyone but the enthusiasts can tell the difference between a DX9 and a DX10 screenshot without a side-by-side comparison, let alone DX10 vs. DX10.1, and even then half of them are faking it to sound like they know what they’re talking about. This is at least partly because, for a long time, speed really was king in the graphics world, despite the snickering at quotable sound bites. There were decades of graphics research papers that engine coders could go through and say “Wow, I could use this in a game if only the hardware were twice as fast!”, and then the hardware would obligingly be four times as fast the next year, allowing even crappy code to do the impossible each generation, so long as you bought a new graphics card.

API versions have ceased to be a compelling reason to upgrade hardware
Don’t get me wrong, DX10 was a massive improvement… for programmers. The change in memory model and the multitasking ability that made things like Aero possible were huge improvements… which users couldn’t see. So it’s not that surprising they were reluctant to buy a resource-heavy OS with what looked like just a few shiny bells and whistles. Now that XP is fading and Windows 7 is looking to give a real reason to upgrade (or at least fewer penalties for doing so), they’ll move to DX11.

But not because they see better graphics or features; rather, because it’s getting hard to get the older stuff and nobody wants to be too far out of date. Likewise OpenGL… OK, OpenGL failed miserably here, losing direction when SGI faltered and letting D3D not only catch up but zoom past in features, while the open source community did its usual posturing and bickering, accomplishing much that was moral but little that was useful. Apple’s decision to open up its work on OpenCL is the only real reason for excitement in recent times.

People are now talking about DX11 and Windows 7, and while the operating system itself seems to do much to fix the wrongs of its predecessor, it still has virtually nothing that makes XP users go “I have to have that!” About the only reason people are looking forward to it is that it isn’t broken badly enough to make staying two generations out of date, rather than one, look like the better option. So upgrades there shall be, but this time with a sigh, not a cheer. Long gone are the days when a new DirectX release (which could happen several times on the same OS, even without a Service Pack!) prompted a flurry of downloads and new graphics card purchases.

The number of people who can spot the hacks or errors or tricks and care about them is decreasing
Right now parallelism is the main form of expansion, and things are heading towards “a computer for every pixel” as general shader counts increase, which is an ideal case for a raytracer. Unfortunately that word has gained a mythical quality, synonymous with some unachievable breakthrough in graphics quality, mostly because non-realtime raytraced CG was for many years light-years ahead of the realtime rasterizers used in games, and people have somehow assumed this gap was due to some hard fact of design rather than simply brute computing power and compromises in efficiency.

A raytracer is no better or worse than what we have now. Had hardware accelerated that kind of work instead of the traditional scanline-triangle-texture-map pipeline, games would not have been magically photorealistic. They would have achieved some things earlier, like Doom 3’s volumetric shadows, but other things would have been problematic, like scaling to higher resolutions. A traditional raster system can scale to higher resolution cheaply and gracefully, but adding surface complexity (texture passes) is expensive. A raytracer can add complexity easily (just another set of lookups), but scaling to higher resolution is hard: more pixels mean more primary rays, and each of those can spawn still more rays for reflection and refraction, so the cost climbs quickly.
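
To make that trade-off concrete, here is a crude toy cost model – a Python sketch with invented weights and scene numbers, not tied to any real hardware. Extra texture passes multiply the rasterizer’s fill cost, while the raytracer folds surface detail into cheap lookups; extra bounces multiply the raytracer’s per-pixel ray count, which is why pushing up the pixel count hurts it so much more in absolute terms.

```python
# Toy cost model for the trade-off described above. The weights and
# scene numbers are invented for illustration only.

def raster_cost(pixels, texture_passes):
    # A rasterizer redraws and refills the frame once per texture pass,
    # so surface complexity multiplies the whole fill cost.
    return pixels * texture_passes

def raytrace_cost(pixels, bounces, rays_per_bounce=2):
    # A raytracer treats extra texture detail as cheap lookups, but every
    # pixel is a primary ray, and each bounce can spawn more rays for
    # reflection and refraction.
    rays_per_pixel = sum(rays_per_bounce ** b for b in range(bounces + 1))
    return pixels * rays_per_pixel

for width, height in [(640, 480), (1280, 960), (2560, 1600)]:
    pixels = width * height
    print(f"{width}x{height}: raster(4 passes) ~{raster_cost(pixels, 4):>12,}"
          f"   raytrace(3 bounces) ~{raytrace_cost(pixels, 3):>12,}")
```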

Other models for creating an image exist too, like radiosity. The main reason we wound up where we did is that the ugly hacks weren’t quite as ugly this way. With a “normal” rasterizer you can throw out anything not in front of the camera, or anything hidden behind something in view (or fog ;)). With a raytracer this becomes problematic once reflections are involved, though it is probably still hackable. With a radiosity system it would likely break things completely; streaming-geometry cities like GTA’s would be really hard. But all renderers head to the same place if you give them enough time and memory and compute power, just with different pitfalls. Eventually they all meet up when you have a computer for every pixel (raster + raytrace), or a computer for every texel (those + radiosity), or a computer for every molecule (goodbye texture maps, hello unification of graphics with physics). So there is still more left to explore; the problem is that none of it is going to matter enough on the screen for most people to care.
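
As a small illustration of the kind of hack a rasterizer gets almost for free, here is a minimal sketch of view-volume culling (hypothetical names, with a crude near/far test standing in for a full frustum check). It is perfectly safe when the camera is the only viewpoint, but a raytracer may need the culled geometry back the moment a mirror is in view, and a radiosity solver needs the whole scene because every surface can light every other.

```python
from dataclasses import dataclass

@dataclass
class Sphere:
    x: float
    y: float
    z: float
    radius: float

def cull_outside_view(objects, near, far):
    # Rasterizer-style shortcut: keep only objects whose bounding sphere
    # overlaps the slab between the near and far planes along the view
    # axis, and throw away everything else (including anything behind
    # the camera).
    return [o for o in objects if o.z + o.radius >= near and o.z - o.radius <= far]

scene = [Sphere(0, 0, 5, 1),    # in front of the camera
         Sphere(0, 0, -3, 1)]   # behind the camera

visible = cull_outside_view(scene, near=0.1, far=100.0)
print(f"{len(visible)} of {len(scene)} objects survive culling")
# Fine for a rasterizer. But if the front sphere were a mirror, a
# raytracer would need to bounce rays back toward the culled sphere,
# and a radiosity pass would need its light contribution regardless.
```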

Nobody cares whether Gears of War is “cheating” with tessellation and displacement maps while Toy Story does it “for real”; they just care about entertainment. And the days when graphics were an obstacle for designers to work around are fading fast, as just about anything within reason becomes possible and the rest becomes fakeable.

The audience is expanding to include more people who care less about graphics
One serious problem for the industry is that gaming itself is starting to have to reach toward the “casual” market to keep growing (see the Nintendo DS or Wii if you’re oblivious), and for those people Quake 3 and The Sims 2 graphics are often “good enough” territory, much like upscaled DVDs. Last I checked, Quake 3 ran at about 60 FPS on low-end hardware two generations out of date (HD 2400/GF 8400) at any reasonable resolution (up to 2560×1600).

Where’s the push to upgrade, then? Changes in shader models enable new effects, but DX9.0c was already Turing-complete, so the removal of restrictions like instruction count limits is mostly visible on the development side, not in the end user’s experience of image quality. And without visible improvements in image quality, people put away their wallets and go back to playing. The PS2, which was roughly on par with Voodoo Graphics in feature set (4MB RAM, fixed-function pipelines) but had brute force to spare (48GB/s memory bandwidth, 256-bit bus, 2.3GP/s fill rate, 16 pipes), is still hanging around and demonstrating complex effects as coders squeeze more and more out of the hardware.
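
A bit of back-of-the-envelope arithmetic shows how much brute force “to spare” that fill-rate figure implies at standard definition. This is a rough upper bound only, using the numbers quoted above and ignoring texturing, blending, and bandwidth contention:

```python
# How many times over could the quoted fill rate repaint a standard-
# definition frame every refresh? (Crude upper bound: ignores texturing,
# blending and memory bandwidth limits.)

fill_rate = 2.3e9          # pixels per second, the figure quoted above
width, height = 640, 480   # a typical standard-definition frame
refresh = 60               # frames per second

pixels_per_second = width * height * refresh
overdraw_budget = fill_rate / pixels_per_second
print(f"~{overdraw_budget:.0f}x overdraw budget per frame")  # roughly 125x
```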

Granted, it was shown early on that it couldn’t stand up to the new generation on an HD display, but for those with SD TVs it was still considered a machine worth having by quite a lot of people. And not just because its successor was $400 or its competition was $300, but because good enough was $0, and that meant more money for games instead of for another box that might (depending on the whim of Sony to deliver on the PS3’s campaign promises) play the old games almost as well as the one they already had, in addition to new ones that didn’t seem all that much better unless you had a 50″ TV.

Another get-out-of-trouble-free card is unlikely to appear
High definition has been the savior of the PC 3D market for the last few years, because screens with a visible improvement in resolution became common just as feature momentum was faltering, and the push for HD in consumer electronics bolstered demand for another bump in hardware power to drive TVs that finally connect easily to gaming PCs without crazy conversion issues. The sudden awareness of HD content and of LCD panels’ native resolution drove a flurry of monitor upgrades for watching movies, which then had a ripple effect: people realized that most LCDs look terrible at anything other than native resolution, which meant they’d need faster graphics cards to play their games at the increased resolution.

But it took TV 60 years to get a resolution bump, and that bump sacrificed one of the key points PC gamers used to cite when turning their noses up at the consoles. So it seems unlikely that people will care about another increase in resolution until the difference once more looks compelling from where they sit, which will take more than another factor-of-four improvement. If anything, it’s more likely history will repeat itself and a color system change will be upon us come 2014-2015 (48-bit floating-point HDR with per-pixel backlighting or OLED would be my guess), so the next chance to really define a durable standard won’t come until then.

The specter of 3D TV looms once again, but as yet nobody has cleared the bar of requiring no more effort on the user’s part than possessing working eyeballs, which seems to be about the maximum the average consumer will endure. Special glasses or sitting in an extremely narrow sweet spot are just not in the cards, so until someone slays that dragon, 3D TV will remain a bogeyman that makes investors and stock prices jump and fills pages of Popular Science with prototype shots for its aging readership to drool over as they long for the flying cars promised in their youth.

They’re only buying it because they can’t not buy it
So now that the storm of yawns surrounding the DX10 generation has been weathered in the port of HD fever and finally passed, everyone wants to trumpet DX11 and the return to glorious mandatory upgrading every time the API odometer ticks over.

The positive perception of the upcoming version of Windows has instilled confidence that things will be back to Business As Usual. Nothing could be further from the truth. Just as the last few rounds of upgrades were driven by the push to make 2-megapixel displays the standard, this round is driven not by improvements in 3D graphics but by a sense of obligation not to be on so outdated a platform that it begins to cause more problems than it avoids. Nobody is writing drooling fanboy articles about the laundry list of DX11 features, or posting screenshots of early builds of games for people to debate the authenticity of. Because nobody cares anymore. And when DX11.1 and DX12 arrive, they still won’t.

Maybe there will be High Dynamic Range High Definition TVs (HDHDTV or HD^2TV?) to drive new sales, or maybe not. Technically there is little reason a DX10 card can’t do 10.1 or 11 features in software. Actually there’s NO reason, other than efficiency. GPUs are now general purpose, so saying they can’t run newer APIs is like saying a Core 2 Duo from 2006 can’t run Windows 7 from 2009. It can. So can a Pentium 4, a Pentium III, a Pentium II… just perhaps not as fast. GPU manufacturers think this change from previous generations is not obvious to the consumer, and they are correct. The technical details of things like Turing completeness and the theory of computability are beyond most of them. But two screenshots that barely differ except in the price tag underneath are not.
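
As a toy illustration of that “it can, just slower” point, here is a minimal sketch – plain Python, deliberately naive, with a hypothetical helper name – of a job that is normally baked into graphics hardware: bilinear texture filtering. Any general-purpose processor can compute it; dedicated texture units just do it millions of times per frame without breaking a sweat.

```python
# Bilinear texture filtering done "in software": nothing about the math
# requires special silicon, it is just far slower than a hardware
# texture unit. (Naive sketch, not production code.)

def bilinear_sample(texture, u, v):
    """Sample a 2D grid of grayscale texels at fractional coordinates (u, v) in [0, 1]."""
    h, w = len(texture), len(texture[0])
    x, y = u * (w - 1), v * (h - 1)
    x0, y0 = int(x), int(y)
    x1, y1 = min(x0 + 1, w - 1), min(y0 + 1, h - 1)
    fx, fy = x - x0, y - y0
    top = texture[y0][x0] * (1 - fx) + texture[y0][x1] * fx
    bottom = texture[y1][x0] * (1 - fx) + texture[y1][x1] * fx
    return top * (1 - fy) + bottom * fy

checker = [[0.0, 1.0],
           [1.0, 0.0]]                      # a tiny 2x2 checkerboard texture
print(bilinear_sample(checker, 0.5, 0.5))   # 0.5: halfway between texels
```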

Editor’s note: The views presented in this article represent only the personal opinions of the author. Members of BSN* may or may not agree with the statements expressed in this article.

Original Author: Toby Hudon

