Sandy Bridge: Turbo

Intel has abandoned the term uncore, instead talking about the System Agent for Sandy Bridge. The System Agent contains all the external links (DMI, PCI Express, memory), a connection to the ring bus and the Power Control Unit (PCU), which handles power management. The System Agent has its own voltage and frequency, separate from the ring and cores, and is always active.
It drives the Turbo feature, the third generation of which debuts in Sandy Bridge and includes two important developments. The first is obviously to make this technology available to the graphics core in the same way as the CPU cores: depending on load, Turbo can prioritise one or several CPU cores, or the graphics core.
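Intel hasn't published the PCU's actual allocation algorithm, so the following is only an illustrative sketch of the general idea: a shared package power budget that is split between the CPU cores and the graphics core according to which side is demanding more. All names and numbers are assumptions, not Intel specifications.

```python
def share_budget(package_budget, cpu_demand, gpu_demand):
    """Grant each domain its requested power if the total fits within
    the package budget; otherwise scale both down proportionally.
    This is a hypothetical policy for illustration only."""
    total = cpu_demand + gpu_demand
    if total <= package_budget or total == 0:
        return cpu_demand, gpu_demand
    scale = package_budget / total
    return cpu_demand * scale, gpu_demand * scale

# A CPU-heavy workload with a lightly loaded IGP fits within budget:
print(share_budget(95.0, 60.0, 20.0))

# When both sides push hard, each gets a proportional share instead:
print(share_budget(95.0, 80.0, 40.0))
```

The point is simply that the budget is a single shared resource: a game that hammers the graphics core leaves less headroom for CPU Turbo, and vice versa.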
The second development is more complex. The new PCU allows Sandy Bridge to exceed its TDP, and for this Intel has introduced the concept of thermal capacitance. Simply put, the TDP is set so as to avoid overheating of the CPU in the system. However, the temperature doesn't rise instantly after an idle period; a certain time is required before the thermal limits are reached. Intel now exploits this small window to let Sandy Bridge exceed its TDP for a few seconds, given that the CPU won't overheat immediately. In practice, after some time at idle, Sandy Bridge can sustain a brief high power demand before stabilizing and dropping back within its TDP limits.
This doesn’t give any advantage during continuous, heavy tasks, but in a workload such as Photoshop, which places intermittent demands on the CPU, these short additional boosts could bring a considerable advantage. This innovation is clearly in line with Intel’s HUGI (Hurry Up and Get Idle) concept. It could also be a new parameter for overclockers to play with.
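The thermal-capacitance behaviour described above can be sketched with a simple first-order thermal model: the heatsink warms in proportion to power drawn and cools in proportion to its temperature above ambient. Every number here (TDP, boost power, thermal constants) is an illustrative assumption, not an Intel specification.

```python
# Minimal sketch of why a chip can briefly exceed TDP after idling:
# starting cool, it takes time for the temperature limit to be hit.
# Model: C * dT/dt = P - (T - T_ambient) / R  (first-order RC).

TDP = 95.0           # sustained power limit in watts (illustrative)
BOOST_POWER = 120.0  # short-term draw above TDP (illustrative)
T_AMBIENT = 35.0     # case temperature, degrees C (illustrative)
T_MAX = 70.0         # temperature at which boost must end
C = 20.0             # thermal capacitance, joules per degree C
R = 0.35             # thermal resistance to ambient, degrees C per watt

def simulate(power_draw, seconds, dt=0.1, t_start=T_AMBIENT):
    """Integrate the thermal model; return the final temperature and
    whether the run stayed below the temperature limit throughout."""
    t = t_start
    for _ in range(int(seconds / dt)):
        t += dt * (power_draw - (t - T_AMBIENT) / R) / C
        if t >= T_MAX:
            return t, False  # limit hit: must drop back to TDP
    return t, True

# Starting cool after an idle period, 120 W is fine for a few seconds...
print(simulate(BOOST_POWER, seconds=5.0))

# ...but the same draw cannot be sustained: the limit is hit eventually.
print(simulate(BOOST_POWER, seconds=60.0))

# TDP-level power, by contrast, settles below the limit indefinitely.
print(simulate(TDP, seconds=120.0))
```

With these made-up constants the boost survives a few seconds but not a minute, while running at TDP converges to a steady temperature below the limit, which is exactly the trade-off the PCU exploits.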
The very charismatic Mooly Eden, General Manager of Intel’s PC Client Group, telling us about how the new Turbo feature works.
Sandy Bridge: HD Graphics
With Sandy Bridge, Intel has proudly announced that it has exceeded its objectives for integrated graphics performance. Sandy Bridge doubles the performance of the integrated graphics core, which is now on a par with current entry-level AMD and NVIDIA GPUs. Intel hasn’t increased the number of processing units in the IGP: there are still 12, as in the Core i3/i5s. They benefit from integration onto the CPU die and the 32 nm process to raise clock speeds, but also from Turbo and from the last level cache, which the graphics driver can decide to use or not depending on the data stream, so as not to waste it where it brings no tangible gain. Intel says, for example, that it will be used for texture reads and writes but not for vertex streams.
Other improvements have been put into place to maximise processing unit throughput, such as an increased number of registers, revised support for complex operations, higher-performance branching, native support for more instructions and the introduction of fixed-function units wherever possible. Intel is targeting efficiency here, and nothing delivers it like fast, economical fixed functions. There’s no DirectX 11 support: although it would be possible to add tessellation to the software geometry pipeline, Intel told us that it can’t get around certain issues, such as support for the new texture formats. This graphics core does, however, include antialiasing and support for DirectX 10.1. It also supports OpenGL 3.1 and, even better, OpenCL. DirectCompute version 4.1 is also on the menu.
Finally, there are more fixed-function units for video, to avoid falling back on the main execution units, which draw more power. Intel says that energy consumption during HD video playback will be halved compared with the current generation. The new HD Graphics therefore seems very well armed for video, both decoding and encoding, with Intel’s demonstrations proving impressive and of the same order as AMD and NVIDIA GPUs. This built-in graphics core should therefore be able to replace a Radeon HD 5400 or GeForce 210 without any problem and relegate numerous entry-level mobile GPUs to history, with even NVIDIA’s Optimus technology likely to trail it in terms of energy efficiency.