Plan B: Intel continuously increasing share in Imagination Technologies
If you have followed financial transactions over the past 18 months, you could witness an interesting trend: both Apple and Intel started to increase their shares in Imagination Technologies. When it comes to Apple, we are not surprised, given that its bread-and-butter products [iPhone, iPod] use PowerVR graphics technology. But for Intel, a company that is investing almost unimaginable amounts of money in creating an x86 GPU, a continuous increase of its share in Imagination Technologies and a constant transfer of people just seemed a bit odd to us. Yes, it may simply be that Intel does not want Apple to take over the company, or that it is increasing its share to gain significant voting rights, but again, that can only be speculated upon.
Intel didn't increase its share in VIA before it started an all-out war with them back in the early 2000s and the company definitely isn't buying shares in nVidia. So, why would Intel increase ownership in its future rival if the company is so sure of its success with the Larrabee project?
On one hand, one might argue this is a move to limit ARM's deadly combo of an ARM core and a PowerVR SGX graphics core, but if you're Intel - where's the Chipzilla confidence? Is it possible that Intel is afraid of the efficiency and performance bar set by Series5? And we won't even go into PowerVR's Series6, set to debut next year.
We are going to leave you to conclude that one. But just bear in mind that Sandy Bridge [32nm, octa-core], the new microarchitecture that will succeed the 45nm tick Nehalem and its 32nm tock Westmere, uses iGFX, a graphics subsystem featuring the same Intel tech [besides the already mentioned nVidia patents, GMA relies on the Imagination Technologies IP portfolio]. For instance, the Intel Atom platform consists of an Atom CPU and the US15W chipset, which features a PowerVR SGX 535 graphics core.
Intel's IDF Larrabee demo
During IDF Fall 2009, held a few weeks ago, Intel showed Larrabee in working form for the first time in history. In early 2009, we saw Larrabee as a wafer in the hands of Patrick P. Gelsinger, but no working parts. At IDF Fall 2009 in San Francisco, a system was shown running Enemy Territory: Quake Wars on Larrabee silicon. But there was no Pat hosting the tech part of the keynote.
There is a big but - that version wasn't the standard Enemy Territory: Quake Wars you can buy in stores, but rather a raytraced version that was demonstrated last year running on a 16-CPU-core Tigerton system. Raytraced ET: Quake Wars is a project by Daniel Pohl [the Quake 3 and 4 Raytraced guy] and the team over at Intel Santa Clara. The "baby Larrabee", as Intel engineers love to call it, made its baby steps running at lower framerates than the 16-core Tigerton system from Research@Intel Day 2008, and at a somewhat lower resolution. But this was expected with such early silicon.
Intel Larrabee prototype board at IDF Fall 2009. Not much has changed from the original PCB scheme.
For a chip that promises DirectX 11 and OpenGL compatibility, seeing a CPU demo wasn't exactly cause for celebration. The demo nicely showed that Larrabee is a chip made out of CPU cores with AVX support that can run CPU code. What it did not show is how Larrabee silicon performs as a GPU. The Larrabee IDF demo was all about the software team at Intel building a compiler that makes game code run on the CPU and the GPU at the same time - and that is a testament to the software team's dedication.
However, there is a mountain to climb, since this approach is valid for new applications only. Intel knows it cannot sell the product without support for OpenGL, OpenCL and DirectX. Building the 100 million or so lines of code for the driver side is a herculean task, but our sources claim that they are working on target - and warn that the hype was simply brought on too soon. Contrary to popular belief, the IDF 2009 demo was not a major milestone for the Larrabee hardware team. Some hardware issues remain unsolved, and those issues will require a massive rework. The Larrabee teams have to deliver silicon that does more than sit next to a 32nm hexa-core Westmere processor and run a single app. The "Larrabee for Graphics" story waits on the software development team, and that is where Larrabee will make or break [into 2011].
The demo didn't impress the audience. Prior to IDF, back at AMD's Evergreen event, we heard comments from industry analysts that didn't have a lot of nice words in store for project LRB. During nVidia's GPU Technology Conference [held a week after Intel's demo], the comments ranged from "shameful" and "they don't have a promised shipping product" to "they failed to execute, and now AMD and especially nVidia are putting CPU [functionality] into a GPU... that's the only way to go". Under the condition of anonymity, we received the following quote from a well-respected analyst: "Intel failed to deliver on what they promised. Larrabee is in no shipping state. Intel ventured into the low-ASP area with Atom and will now start to be pressured by ARM on the low end. Intel's biggest mistake was stepping on ARM's turf - they have to fight Samsung, TI, Qualcomm and now nVidia".
But as we already wrote, the IDF demo was more a demonstration of Intel's brilliant compiler than of Larrabee silicon. If silicon were the only thing that mattered, we could call this a multi-billion dollar failure.
Embarrassing parallels: Intel Larrabee vs. Boeing 787
Now, we're not certain exactly what is going on with American companies and multi-billion dollar projects, but there is an unflattering comparison to be made between two large multi-billion dollar projects that were supposed to revolutionize their respective industries: the Boeing 787 and Intel's Larrabee. What we found weird are the similarities between the two:
- Executives hype up the product and claim that the competition is "done"
- Analysts and biased press feed on the hype and praise the company, citing that "the competitors are dead"
- Lower-level employees start dissing the competition on Internet forums
- Executives "launch" the empty shell with engines attached/wafer with dead chips on it
- Internal roadmaps start moving the part deep into the future
- Executives stop talking about the project
- Engineers from different groups start publicly discussing that the thing is going haywire
- Roadmaps are moving even further in the future
- Executives start pretending that the project doesn't exist
- Engineers start to leave the company and talk trash about their own project
- Key execs get the chop
- Demo is staged just for the public [ZA001 taking several taxi runs / Larrabee RT demo]
- Multiple project revisions pump up the cost
- The project cost is measured in billions
So far, a happy ending for either project is yet to be written. One might argue that both cases are schoolbook examples of over-promising and under-delivering: the 787 was supposed to have been flying revenue service for the past 18 months [the test plane still hasn't left the ground, i.e. a 30+ month delay], and journalists were supposed to have reviewed Larrabee cards by this time. According to an earlier statement, we were in fact supposed to have Fusion chips [32nm CPU + 45nm GPU] at this point in time. Bear in mind that this was the reason AMD panicked and paid 5.9 billion dollars [5.4 billion initially, plus an additional 550 million for patents and so on] for ATI Technologies, all in order to be just one year behind Intel's own fusion CPUs.
Personally, I don't agree with "over-promise and under-deliver", because it is an over-simplification of the projects at hand. It is not easy to create a chip with 10,000 transistors, let alone one in excess of two billion transistors. Given all the problems that the almighty Intel [we're not being sarcastic here; Intel is truly a behemoth of the world, not just the IT industry] experienced with Larrabee, each and every member of the IT industry should think twice before stating that creating a "chip that can do graphics" is something easy. You have to have key people and, more importantly, teams that have worked together for years and have experience... and even that does not guarantee success. If you go back to 2005, you'll remember that ATI had a three-month delay on the ATI R520 Fudo [Radeon X1K series] because of a single stupid bug in the silicon, or the nine-month delay of R600 [Radeon HD 2900].
However, consider the success and reliability of the Airbus A380, and the strength nVidia pulled from the NV30 fiasco and ATI from R600. Eric Demers [AMD Fellow and "CTO (gfx)"] told us: "If R600 never happened, we would not see the light and changed our ways". In nVidia's case, the NV30 fiasco gave birth to the GeForce 6800, the 7800 and the PlayStation 3 GPU; ATI is now kicking ass with the Evergreen family; and we can say that both Boeing and Intel will deliver sooner or later. Everything else is empty talk.
© 2009 - 2013 Bright Side Of News*, All rights reserved.