Intel Nehalem-EX review
With Intel and AMD vying for attention with a flurry of new announcements, Intel's launch of its "Nehalem-EX" Xeon processors is one of the most significant for a number of reasons. The Xeon 6500 and 7500 series complete the move of all Intel's processors over to the Nehalem architecture and finally replace the aging six-core Xeon 7400s. These were the last of the Penryn generation and were in serious danger of being outperformed by the lower-end Xeon 5500 and 5600 DP models and even AMD's six-core 8000-series Opterons.
A key feature, no doubt driven mainly by the virtualisation market, is massively increased memory capacity. A pair of 6500s supports up to 512GB of 1,066MHz DDR3, while the 7500 in a quad-socket server can handle 1TB. Unlike the 5600 Xeons, which have moved to 32nm fabrication, both new series stay at 45nm.
The two series target distinct markets: the 6500 is aimed at businesses that aren't prepared to pay a premium for a four-socket server but still want plenty of memory on tap. The three models in this family support dual sockets and offer a choice of four, six or eight cores and up to 18MB of shared Level 3 cache. The 7500 family offers more choice, with eight models. As with the 6500, they range from four to eight cores but, at the top end, shared L3 cache rises to 24MB.
Both Xeon series deliver a raft of new features, with RAS (reliability, availability and serviceability) at the top of Intel's agenda. For years, the RISC-based market has been untouchable in these areas, but the Xeon 7500 is specifically being touted as a far more cost-effective and equally reliable replacement. RAS centres on memory and subsystem interconnects and comprises 20 features. Note that although these have been implemented by Intel, in many cases it's up to the various OEMs to decide whether to use them. This approach makes these processors highly flexible, as OEMs can weigh up the pros and cons of cost, reliability and performance and decide which features to implement based specifically on customer demands.
MCA (machine check architecture) Recovery tops the list and is a capability that has been present on RISC and Itanium systems for generations. MCA itself is present in existing Intel x86 processors but, while it can detect errors in hardware, in many cases it can't do anything about them. MCA Recovery offers remedial action for basic subsystem failures such as single-bit memory errors; two-bit memory errors are beyond correction, but it can detect them and flag them to the system's firmware. These alerts are then passed on to the OS, which decides what to do: if possible, it marks the affected area as bad and works around it but, if not, it can execute a clean shutdown.
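The single-bit versus two-bit distinction comes from how ECC codes work. The toy SECDED (single-error-correct, double-error-detect) sketch below illustrates the principle only — it is not Intel's actual implementation — using a Hamming(7,4) code plus an overall parity bit:

```python
# Toy SECDED code: a sketch of the principle, not Intel's actual ECC.
# Single-bit flips are correctable; double-bit flips are only detectable.

def encode(data):
    """Encode 4 data bits into an 8-bit codeword (positions 1-7, overall parity at 0)."""
    c = [0] * 8
    c[3], c[5], c[6], c[7] = data
    c[1] = c[3] ^ c[5] ^ c[7]          # parity over positions with bit 0 set
    c[2] = c[3] ^ c[6] ^ c[7]          # parity over positions with bit 1 set
    c[4] = c[5] ^ c[6] ^ c[7]          # parity over positions with bit 2 set
    c[0] = sum(c[1:]) % 2              # overall parity, for double-error detection
    return c

def decode(c):
    """Classify the state of a received codeword, correcting in place if possible."""
    syndrome = ((c[1] ^ c[3] ^ c[5] ^ c[7])
                | (c[2] ^ c[3] ^ c[6] ^ c[7]) << 1
                | (c[4] ^ c[5] ^ c[6] ^ c[7]) << 2)
    parity_ok = sum(c) % 2 == 0
    if syndrome == 0 and parity_ok:
        return "no error"
    if syndrome and not parity_ok:
        c[syndrome] ^= 1               # single-bit error: flip it back
        return f"single-bit error at position {syndrome}: corrected"
    if syndrome and parity_ok:
        return "double-bit error: detected but uncorrectable"
    return "error in overall parity bit"
```

Flipping one bit of a codeword yields a corrected read; flipping two bits is merely flagged — exactly the case that gets escalated to firmware and the OS.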
MCA Recovery has a particular focus on virtualisation, as the hypervisor can look at where the memory error occurred and see which VM is using that region. It can then shut down or restart only the affected VM, rather than bring the whole server down along with every VM it's running.
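That decision flow amounts to mapping the failing physical page back to its owner. A minimal sketch, with invented VM names and page ranges purely for illustration:

```python
# Illustrative sketch of MCA Recovery handling in a hypervisor: the VM
# names and physical page ranges below are invented for the example.
VM_MEMORY_MAP = {
    "vm-web":  range(0x0000, 0x4000),
    "vm-db":   range(0x4000, 0x8000),
    "vm-mail": range(0x8000, 0xC000),
}

def handle_memory_error(failed_page):
    """Restart only the VM that owns the failed page; shut down cleanly otherwise."""
    for vm, pages in VM_MEMORY_MAP.items():
        if failed_page in pages:
            return f"restarting {vm} only; other VMs keep running"
    # the page belongs to the hypervisor itself, so fall back to the
    # clean shutdown described above
    return "error in hypervisor memory: clean shutdown"
```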
Memory-mirroring features have also been improved. The processors have four memory controller channels within a bank, with two pairs running in lockstep. This allows memory to be mirrored across a single bank. Add the QPI (QuickPath Interconnect) introduced with the Xeon 5500 and memory can now be mirrored across different banks.
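In its simplest model, mirroring means every write lands in two banks, so a read that fails on one can be served from the other. A minimal sketch under that assumption:

```python
# Minimal sketch of memory mirroring, assuming the simplest model:
# every write lands in both a primary and a mirror bank, so a read
# that fails on the primary can be served from the mirror instead.
class MirroredMemory:
    def __init__(self):
        self.primary = {}
        self.mirror = {}

    def write(self, addr, value):
        self.primary[addr] = value   # both banks updated in lockstep
        self.mirror[addr] = value

    def read(self, addr, primary_ok=True):
        # fail over transparently if the primary bank reports an error
        return self.primary[addr] if primary_ok else self.mirror[addr]
```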
Memory options have been simplified, as the processors support only dual-channel 1,066MHz RDIMMs. This is because each of the four memory controller channels in the processor connects to a buffer chip that supports four DIMM slots, allowing each processor to support a total of 16 DIMM slots.
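The arithmetic behind the headline capacities is straightforward. Assuming 16GB RDIMMs — the module size that matches the quoted totals — the numbers fall out as follows:

```python
# Back-of-envelope check of the quoted memory ceilings. The 16GB RDIMM
# size is an assumption chosen to match the review's totals.
CHANNELS_PER_SOCKET = 4    # memory controller channels per processor
SLOTS_PER_BUFFER = 4       # DIMM slots behind each channel's buffer chip
DIMM_GB = 16               # assumed RDIMM capacity

SLOTS_PER_SOCKET = CHANNELS_PER_SOCKET * SLOTS_PER_BUFFER   # 16 slots

def max_memory_gb(sockets):
    return sockets * SLOTS_PER_SOCKET * DIMM_GB

for sockets in (2, 4, 8):
    print(f"{sockets}-socket: {max_memory_gb(sockets)}GB")
# 2-socket: 512GB, 4-socket: 1024GB (1TB), 8-socket: 2048GB (2TB)
```

Those figures line up with the 512GB dual-socket, 1TB quad-socket and 2TB eight-way ceilings quoted in this review.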
Intel’s platform hardening extends further to the QPI, which runs CRC checksums on all data and provides self-healing. In the event of a physical fault such as a broken solder joint, for example, it may be possible to reroute data over the remaining functional QPI links. If a processor fails completely, it can be taken offline by disabling its QPI links and replaced without bringing the server down. The QPI also allows OEMs to expand well beyond the base four sockets: using a “glue-less” design, they can build eight-socket systems with onboard QPI links without having to design a custom chipset.
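Stripped of the hardware detail, rerouting around a dead link is a path search over the surviving links. The topology and failure model below are invented for illustration:

```python
# Sketch of QPI-style rerouting reduced to a path search: the 4-socket
# fully-meshed topology and the failure model are invented for illustration.
from collections import deque

LINKS = {0: {1, 2, 3}, 1: {0, 2, 3}, 2: {0, 1, 3}, 3: {0, 1, 2}}

def route(src, dst, failed_links):
    """Breadth-first search for a path that avoids any failed link."""
    queue, seen = deque([[src]]), {src}
    while queue:
        path = queue.popleft()
        if path[-1] == dst:
            return path
        for nxt in LINKS[path[-1]]:
            if nxt not in seen and frozenset((path[-1], nxt)) not in failed_links:
                seen.add(nxt)
                queue.append(path + [nxt])
    return None  # destination unreachable over the surviving links

print(route(0, 3, {frozenset((0, 3))}))   # detours via another socket
```

With the direct 0-3 link marked bad, traffic hops through an intermediate socket instead of failing outright — the same idea as the self-healing described above.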
In an eight-way server, the maximum supported memory goes up to 2TB. For even higher processor counts, manufacturers can build node controllers allowing servers to be expanded to 256 sockets and possibly beyond.