16 cores in action: Asus Z9PE-D8 WS and Intel Xeon E5-2687W
by Guillaume Louel
Published on September 26, 2012

While for the general consumer the introduction of the Core i7 Nehalem processors meant above all the splitting of Intel’s range across two sockets (LGA1366, launched in November 2008, then LGA1156 in September 2009), for the Xeon range it represented a profound reorganisation.

For those who aren’t familiar with it, Intel’s Xeon range comprises processors designed for the enterprise market, namely servers and workstations. From a manufacturing point of view, many Xeons share the same dies as the mass market processors (one last, extra die, paired with a dedicated socket, is reserved for configurations beyond 4S), but some of their features are turned on or off according to the segmentation. One feature that has always been withheld from Intel’s mass market range is the ability to run two processors at the same time in the same machine, a setup commonly called SMP (Symmetric Multi-Processing).
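To make this distinction concrete, here is a minimal sketch of how an SMP machine appears to software; it is only illustrative and assumes a Linux system, where the kernel lists each logical CPU and its "physical id" (the socket it belongs to) in /proc/cpuinfo.

/* Minimal sketch: how an SMP machine shows up to software on Linux.
 * Counts the logical CPUs exposed by the kernel and the distinct
 * "physical id" values (i.e. sockets) listed in /proc/cpuinfo. */
#include <stdio.h>
#include <unistd.h>

int main(void)
{
    long logical = sysconf(_SC_NPROCESSORS_ONLN);  /* logical CPUs online */

    int seen[64] = {0};
    int sockets = 0;
    FILE *f = fopen("/proc/cpuinfo", "r");
    if (f) {
        char line[256];
        while (fgets(line, sizeof line, f)) {
            int id;
            /* Each logical CPU block contains a "physical id : N" line. */
            if (sscanf(line, "physical id : %d", &id) == 1
                && id >= 0 && id < 64 && !seen[id]) {
                seen[id] = 1;
                sockets++;
            }
        }
        fclose(f);
    }

    printf("logical CPUs: %ld, physical packages (sockets): %d\n",
           logical, sockets);
    return 0;
}

On a dual Xeon machine this reports two physical packages, while the operating system otherwise schedules threads across all the logical processors as a single pool.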

Before going into performance, let’s go back over a few technical particularities of these platforms, which play a decisive role in how well they can exploit several processors at the same time.

QuickPath Interconnect

One of the main particularities of the LGA1366 platform launch was the introduction of a new communication bus, QPI. QPI is a point-to-point link designed to connect the processor to the rest of the machine, generally speaking the chipset. Until then, Intel had been using a proprietary bus, the FSB (which required a license on both the processor and the chipset side), whose clock speed evolved over time.

For the inspiration behind QPI (both in concept and in technical implementation), we have to look at the competition. After dropping Intel’s FSB in 1999, AMD launched a new point-to-point bus in 2001, HyperTransport, which first and foremost served to link the northbridge and southbridge of a chipset, on the Nvidia nForce for example. Then, with the Athlon 64, it was also used as the interconnect between the processor and the northbridge. With the simultaneous launch of the Opteron platform, AMD used this same bus to connect processors to each other in multiprocessor machines.

Thus, just as the Athlon 64s had only a single active HyperTransport link (in theory) against three for the Opterons, Intel equipped its Nehalem processors with several QPI links (up to four depending on the version), which can be activated according to usage. On a Core i7 desktop processor, only one QPI link is active, linking the processor to the chipset. When all four links are active, however, novel configurations become possible, such as this quad processor platform where each chip is linked to the other three as well as to the chipset:

[Diagram: quad processor platform, each processor linked to the other three by QPI and to the chipset, with its own local memory]
As you can see in this diagram, each processor has its own memory: with Nehalem, the memory controller, previously located in the northbridge, moved onto the same die as the processor. This raises the very real questions of how the processors share data among themselves and how the operating system sees such a system.
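One concrete way of answering the second question is to look at the NUMA (Non Uniform Memory Access) topology that the operating system builds for this kind of platform. The sketch below is only illustrative and assumes a Linux machine with the libnuma library installed (compile with -lnuma); it lists the nodes the OS sees, typically one per socket, with their local memory and the relative "distance" between them.

/* Minimal sketch: how the OS describes a multi-socket, multi-memory
 * machine as NUMA nodes. Assumes Linux with libnuma (link with -lnuma). */
#include <stdio.h>
#include <numa.h>

int main(void)
{
    if (numa_available() < 0) {
        printf("No NUMA support on this system.\n");
        return 0;
    }

    int nodes = numa_num_configured_nodes();
    printf("NUMA nodes (typically one per socket): %d\n", nodes);

    for (int n = 0; n < nodes; n++) {
        long long free_bytes = 0;
        long long total = numa_node_size64(n, &free_bytes);
        printf("node %d: %lld MB total, %lld MB free\n",
               n, total >> 20, free_bytes >> 20);

        /* Relative memory access cost as reported by the firmware:
         * 10 usually means local memory, higher values a remote node. */
        for (int m = 0; m < nodes; m++)
            printf("  distance to node %d: %d\n", m, numa_distance(n, m));
    }
    return 0;
}

On such a system, the numactl --hardware command reports the same information from the command line.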
