DDR3: Impact of channels & timings
by Guillaume Louel
Published on February 22, 2011
Since the advent of DDR3, the question of memory's impact on overall machine performance seems to have been pushed somewhat into the background. While discussion of DDR2 focused on latency, the move to DDR3 shifted the goalposts.
This is partly down to decisions taken by JEDEC in the official DDR3 specifications, where the emphasis was placed on reducing power consumption and increasing bandwidth.
Meanwhile, memory controllers have adapted to these changes, most notably through integration of the memory controller into the processor itself (historically it was built into the northbridge). AMD adopted an on-die memory controller in 2003 with its Athlon 64 (paired with DDR memory at the time); Intel followed later with the introduction of its Core i CPUs (sockets 1366 and 1156). Processor caches have also grown in size, and level 3 cache has been rolled out across the board to better hide latency. The impact reaches all the way into the pipelines, as anticipating memory operations as early as possible (prefetching) has become a sine qua non for architecture engineers.
With all this effort going into the mitigation of the impact of latency on memory, is it really now the case that the only question that matters in any discussion on memory is bandwidth? After all, the third memory channel on Core i7s is often described as giving little improvement in performance.
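The theoretical stakes of that third channel can be sized with simple arithmetic: each DDR3 channel is 64 bits wide, so it moves 8 bytes per transfer, and peak bandwidth scales linearly with channel count. A minimal sketch of that calculation (the function name is ours, not from the article):

```python
# Illustrative sketch: theoretical peak DDR3 bandwidth from the
# effective transfer rate (in MT/s) and the number of channels.

def peak_bandwidth_gbs(transfer_rate_mts: float, channels: int) -> float:
    """Peak bandwidth in GB/s for 64-bit-wide (8-byte) DDR3 channels."""
    bytes_per_transfer = 8  # each channel is 64 bits wide
    return transfer_rate_mts * bytes_per_transfer * channels / 1000

# DDR3-1333 examples: single-, dual- and triple-channel peaks
for n in (1, 2, 3):
    print(f"DDR3-1333 x{n} channel(s): {peak_bandwidth_gbs(1333, n):.1f} GB/s")
```

On paper, then, triple-channel DDR3-1333 offers around 32 GB/s against roughly 21 GB/s for dual channel; the question the article explores is how much of that theoretical headroom real workloads actually use.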
We’re going to take a closer look at these issues to get a clearer picture of the current situation with respect to memory across the various platforms in use: AMD’s socket AM3 and Intel’s sockets 1155 (Sandy Bridge), 1156 (Lynnfield/Clarkdale) and 1366 (Nehalem).