NEC FlexPower Server review
NEC may keep a comparatively low profile in this market, but it has always had a presence in blade servers. With its latest FlexPower Server it ups the ante, as this system is aimed squarely at SMBs – a market where blade servers have traditionally been seen as a luxury.
Co-developed with Intel, the FlexPower Server is presented as a cost-effective alternative for businesses looking to purchase up to three servers over the coming year. It offers solid upgrade and expansion paths, quality fault tolerance and some very interesting storage possibilities. This 6U chassis accepts up to six server blades, or compute modules, and has enough room for 14 hot-swap SFF hard disks, which can be placed into zones and assigned to selected modules.
The compute modules have been designed specifically for this system and are based on Intel’s PAL5000 platform, so they support both dual- and quad-core Xeon processors. The price of the review system includes a pair of modules, each kitted out with a 2GHz Xeon E5405 processor teamed with 4GB of FB-DIMM memory and a pair of embedded Gigabit ports, which can be expanded to four with a dual-port mezzanine card.
The storage scenario comprises two drive bays at the front, each offering seven hot-swap slots for low-power SAS and SATA SFF hard disks. The bays are routed through the chassis midplane to a storage controller blade at the rear. This has an embedded LSI SAS chip that supports a range of array types, including dual-drive redundant RAID6. Selected drives are placed into storage pools, from which you create multiple virtual volumes, each configured with its own RAID array type.
Virtual drives are assigned to compute modules, which see them as local storage, so you’re effectively creating a SAN within the chassis. This offers good fault tolerance: if one compute module fails you just reassign its virtual drive to another standby module. You can also expand storage capacity, as the controller has an extra SAS port for connecting external drive arrays and adding a second controller brings active/active failover into play.
The modules’ network ports are routed through to Ethernet switch blades at the rear, and the review system came with a 10-port Gigabit module. These support only L2 switching, but you can add a second switch blade to handle each module’s second network port, so you can implement network failover too. You get one management I/O module as standard, which has a dedicated network port, and redundancy is supported by adding a second one. The price includes four hot-swap power supplies, while cooling is handled by two hot-plug fan modules at the rear and a third underneath the drive bays.
NEC has focused closely on ease of deployment and management, and first contact with the Modular Server Control (MSC) web interface indicates this has largely been achieved. The console homepage provides a dashboard showing power status, enclosure, drive and CPU temperatures, system health and detected problems.
Full remote access is provided for the compute modules via KVM over IP, and the switch blades also have their own web interface to monitor network activity, configure ports and create VLANs. Storage pools are easy to create and the resultant virtual drives are assigned to selected compute modules. We had no problems installing an OS, as we used the local optical drive on the system running the MSC as a virtual boot device.
The Transport option is used to move a virtual drive: you take it offline and assign it to another module. It’s easy to keep track of all storage, since the MSC provides a flow-chart-style graphic that shows clearly what each storage pool contains and which compute modules its virtual drives are assigned to.
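The drives-to-pools-to-modules relationship described above can be modelled in a few lines. This is purely a conceptual sketch of the FlexPower storage model, not NEC’s actual management API – the class and method names here are illustrative assumptions:

```python
# Illustrative model of the FlexPower storage hierarchy:
# physical disks -> storage pool -> virtual drives -> compute modules.
# Names are hypothetical; the MSC exposes this via its web interface,
# not a Python API.

class VirtualDrive:
    def __init__(self, name, size_gb, raid_level):
        self.name = name
        self.size_gb = size_gb
        self.raid_level = raid_level  # each virtual drive has its own RAID type
        self.module = None            # compute module that sees it as local storage
        self.online = False

    def assign(self, module):
        if self.online:
            raise RuntimeError("take the virtual drive offline first")
        self.module = module
        self.online = True

    def transport(self, new_module):
        # Mirrors the MSC 'Transport' option: take the drive offline,
        # then reassign it to another (e.g. standby) module.
        self.online = False
        self.module = None
        self.assign(new_module)

class StoragePool:
    def __init__(self, name, drives):
        self.name = name
        self.drives = drives          # physical SFF disks placed in this pool
        self.virtual_drives = []

    def create_virtual_drive(self, name, size_gb, raid_level):
        vd = VirtualDrive(name, size_gb, raid_level)
        self.virtual_drives.append(vd)
        return vd

pool = StoragePool("pool0", drives=["disk%d" % i for i in range(4)])
vd = pool.create_virtual_drive("vd0", size_gb=200, raid_level="RAID6")
vd.assign("module1")
vd.transport("module2")   # failover: move the storage to a standby module
print(vd.module, vd.online)
```

The point of the model is the failover path: because a virtual drive is a chassis-level object rather than disks physically tied to one blade, moving it to a standby module is a reassignment, not a re-cabling job.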
Server configuration: 6U rack/pedestal enclosure
CPU family: Intel Xeon
CPU nominal frequency: 2.00GHz
CPU socket count: 6
Hard disk configuration: 8 (max 14) x 147GB Fujitsu 10k SAS SFF hard disks in hot-swap carriers
Total hard disk capacity: 1,176GB (8 x 147GB)
RAID module: NEC RAID module
RAID levels supported: 0, 1, 1E, 10, 5, 6
Gigabit LAN ports: 2
Conventional PCI slots total: 0
PCI-E x16 slots total: 0
PCI-E x8 slots total: 0
PCI-E x4 slots total: 0
PCI-E x1 slots total: 0
Power supply rating: 1,050W
Noise and power
Idle power consumption: 270W
Peak power consumption: 506W