Writing a review is a tough process, and we reviewers often find ourselves between a rock and a hard place - finding a place to write in peace is sometimes a great challenge. Given the subject of this story, I decided to write this article 12 kilometers above the ground, between San Francisco and Frankfurt. The original itinerary was supposed to take me back to Europe through Washington DC, but due to the snow that shut down airports on the East Coast, I took the San Diego - San Francisco - Frankfurt route.
The only thing not written in the air will be the performance results. Both Anshel and I worked extensively with the test unit over the course of two weeks, trying to find as many bugs as possible to see whether this technology is really ready or not. Prior to boarding, we received new drivers from nVidia, as the ones we originally had on the notebook had some issues we'll elaborate on a bit later. This required us to re-test all of the benchmarks and, well, miss the launch on Tuesday. We hope it was worth the wait.

Challenges of notebook design today
In order to understand why nVidia went through all the grief of creating the Optimus Technology, it is necessary to get down to the essentials of notebook design. Notebook design is tough, there is no question about it. Packing everything into a small, compact format - often smaller than a full-sized PC keyboard, with a screen that folds on top - carries a lot of complexity, and it is very easy to make a cardinal design error that could cost a lot of money and market share. Matt Wuebbling, Senior Product Manager for Notebooks at nVidia, detailed the situation on the market today.
As notebooks grow in popularity and their market share increases, consumer expectations grow day by day, with no regard for the actual physical implementation. Naturally, this leads to the market wanting gaming performance and wanting discrete graphics. The problem with discrete graphics is that the GPU will consume power regardless of whether it is bogged down to merely displaying an image or cranking at full speed. Both AMD and nVidia have made great improvements with their internal power designs, but even when those designs reached Intel's level of efficiency, OEM vendors were still forced to buy Intel's integrated graphics, which was part of the Centrino platform and is an integral part of the chipset. Disabled or not, that part will also consume power. Enter the world of Hybrid Graphics.
Launched with the Sony VAIO SZ-110B in April 2006, Hybrid Graphics originally required a reboot - as you can imagine, not exactly a seamless experience, and if you did this while running on battery, you would lose additional battery life during the boot-up process. The second generation was introduced in 2008 with nVidia-powered notebooks such as the ASUS UL50VT and AMD-powered notebooks such as the HP Envy 13.
Matt Wuebbling demonstrated to us how graphics switching has worked up until now, and long story short - it sucks. The second generation ditched the reboot requirement, but you still got a blank screen during the switch, and if you had an application with an open DirectX window, the switch would not happen regardless of the content, and so on.
The reason for all the issues was the way companies implemented discrete graphics. Unfortunately, it isn't as simple as we imagined. One of the main culprits in combining integrated and discrete graphics is the necessity of using a MUX switch for each and every connection you have on the motherboard. As you can see in the images, a MUX switch isn't exactly easy to implement into the design.

Upcoming Arrandale [Core i5] motherboard that utilizes GeForce G210M and has a bunch of MUX switches
In the picture above, you can see an upcoming Arrandale motherboard that features the conventional approach to discrete graphics: there are 4-5 MUXes on the front and back of the board, the motherboard has to use around ten layers, and those MUX switches are really expensive - just like the PLX enterprise-grade PCIe switches used on dual-GPU graphics cards [you'd be surprised to find out how much those switches cost]. Given that the MUX switch drives both digital and analog signaling, notebook manufacturers usually rely on chips such as Pericom's PILVD1012, a 10-channel LVDS MUX chip. Now, add several of them plus all those lines in the picture below, and you can imagine the complexity of connecting all of that on the motherboard. Add an extra PCB layer, and the cost goes straight up.

How a MUX Switch Works - all that additional wiring added to an already complex design
A hardware switch wasn't the only thing needed to enable Hybrid Graphics - nVidia also had to bundle Intel's driver inside its own to make it work.
At the end of the day, nobody was happy. The notebooks cost much more than they should have, additional time was lost in qualification, and users didn't get the experience they wanted. Now, enter the world of Optimus.
© 2009 - 2013 Bright Side Of News*, All rights reserved.