Larrabee and 3D

Formerly a secret, it has now been clearly confirmed by Paul Otellini, Intel's CEO: yes, Larrabee is intended for graphics rendering.
You may recall that Larrabee is a massively parallel processor whose purpose is to fill a huge gap in Intel's current line-up as it tackles massively parallel problems. This is similar to what Nvidia is doing with CUDA and what AMD says it will eventually do. It's a large future market, and therefore of interest to Intel, which doesn't want to leave a significant portion of it to the competition.
Larrabee is a processor that will comprise 16 or 24 cores in its initial version. These will be simplified in-order IA (x86) cores, stripped of much of their control logic in order to concentrate silicon on execution resources. Each core's vector unit will be 512 bits wide (not to be confused with 512-bit precision), allowing it to process, for example, 16 FP32 operations in a single instruction.
Alongside this cluster of cores sit a large shared cache and a block containing special-function units and the memory controller. Larrabee should be capable of communicating with the rest of the system over either QuickPath or PCI Express, while supporting Geneseo, a set of extensions to the latter bus.
With the GPU market edging ever closer to that of compute, it's only logical for Intel to take an interest, and Larrabee will do just that. Since it is based on IA cores, whose instruction set will be extended (SSE++) to handle the new tasks assigned to them, Larrabee will not map perfectly onto the Direct3D and OpenGL graphics APIs. Intel will therefore have to translate the functions of these APIs into ones Larrabee can execute. Note that Nvidia and AMD also have to do the same for their GPUs, although those were conceived with these APIs in mind and stay as close as possible to them.
Intel's work will therefore not be fundamentally different, just more complex. Or at least it will be, provided the "backbone" of a GPU is found among Larrabee's special-function units; if that too has to be emulated, things could become complicated.
The Polaris wafer: the 80-core research CPU from which Larrabee is derived.
But if Intel really wants to break into the graphics market, why not go straight to producing a GPU? There are several reasons. A likely lack of expertise would make a frontal attack more difficult, but there is also the chance to kill several birds with one stone by addressing the graphics and compute markets at once, and ray tracing besides. Intel really believes in the viability of this rendering mode for real-time applications, including games.
While it does indeed offer a number of advantages, ray tracing remains very heavy, and the demonstrations so far, although technically impressive, are still very basic visually. If Intel is to be believed, Larrabee could change this and finally popularize ray tracing. Moreover, a dedicated API will be offered in addition to Direct3D and OpenGL. In our opinion, it's still too early for ray tracing to be of interest, and promises of games rendered in real time with ray tracing on Larrabee seem a little dubious.
Justin Rattner and Daniel Pohl, admiring a ray tracing demo we have now seen over and over for several months. Of course it runs faster, but could we get a new one for the next IDF?
This is all the more true because developing a video game takes several years. When Larrabee is released, engines capable of supporting ray tracing will not magically appear; time will be needed for the change to take effect. Instead, we expect a Larrabee in 18 to 24 months with performance comparable to today's GeForce 8800. That would allow serious ray tracing development to begin, something that would fuel its successors. In short, it will be a superb development platform more than a revolutionary product for gamers. Of course, Intel could always surprise us.
In the meantime, Intel intends to significantly improve the performance of its integrated graphics. First up is the G35, which will be a little more efficient and will support DirectX 10. Incidentally, Intel mentioned in its presentations that although the G965 will not support DirectX 10, the GM965 will indeed support this API, via a driver planned for the beginning of 2008.
Finally, the most significant increase in performance should come with Nehalem, because the GPU will be integrated into the CPU, with what we can hope will be better memory access and a boosted frequency.
These successive evolutions should result in a ten-fold increase in integrated graphics performance between now and 2010, according to its creator. For our part, we are waiting to see Intel's software development teams sharpen up: so far we haven't seen anything too promising at the driver level, and advertised features often aren't actually supported until much later. In short, there is room for improvement in this area.