Thursday, August 27, 2009

Building a High-end Windows 2008 Database Server - Page 3

The Components (continued)


Case

I require a rackmount case that includes rails and can accommodate the large number of drives I need.  It is possible to get a case that will hold 24 x 2.5” drives in only 2U, and I initially picked such a case.  Unfortunately, the only acceptable 2.5” drives that fit my budget were unavailable from any source I could find.  Everything else was at least double the price of the best 3.5” drives that had twice the capacity.  I ended up with a 4U Super Micro case that can hold 24 x 3.5” drives.  There are very few cases like this on the market.  (See pictures at the end of this document.)

This specific case puts the power and reset buttons along with the six indicator lights on the handle, since there is no room on the front for anything but the drive cages.  There is also a slot on the back for a half-height CD/DVD drive.  It is possible to use this case as a do-it-yourself external drive chassis by adding an optional chassis power card in place of the motherboard.  I actually considered that when looking into clustering.

Internally, there are two redundant hot swappable power supplies, five hot swappable fans, and an adjustable air shroud for concentrating air flow to the CPUs.  At the datacenter, each server gets plugged into two separate power strips that are on a very redundant power grid (two power providers + batteries + two huge diesel generators with multiple fuel suppliers). 

A big advantage of the larger case is that I can use full-size expansion cards and it is easy to route cables in a way that maximizes air flow within the case.  On the downside, this case is extremely heavy, requiring big rack rails that are too long to fit in common-sized enclosed racks.  You will actually need to have at least 31" of clearance from the front post to the back door.  My rack has only 29", which meant that I had to use a heavy-duty sliding shelf instead of the included rails (very aggravating).


CPU

The CPU needs to be selected before I can narrow down the motherboard possibilities.  Not only do I need to consider different CPU families, but also speed within the families.  I usually work my way up the speed ladder until I see a large jump in the price.  Instead of paying too much for speed, it makes more sense to go with more CPU cores and more CPUs.  SQL Server is very good at taking advantage of the extra “processors.”

It wasn’t until Intel came out with the Core 2 Duo that I felt they were making fantastic CPUs (I’ve used everything back to the 8086).  The Xeon processors based on this and the new i7 core are a bargain compared to everything else that is out there.  I ended up selecting the 80W version of the E5430, which has four cores, a 12 MB L2 cache, and runs at 2.66 GHz.  If I had selected the 3 GHz model (E5450), it would have doubled the price, though none of my customers would ever notice that marginal speed difference.  When I see such an irrational price jump, I know I have gone far enough up the price/performance ladder.
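The "price ladder" heuristic can be sketched in a few lines.  The model list and dollar figures below are purely illustrative placeholders, not actual 2009 street prices:

```python
# Illustrative (model, GHz, price) tuples -- placeholder prices,
# not real 2009 quotes.
xeons = [
    ("E5405", 2.00, 300),
    ("E5420", 2.50, 350),
    ("E5430", 2.66, 460),
    ("E5450", 3.00, 920),   # roughly double the E5430, per the text
]

def climb_price_ladder(models, max_jump=1.5):
    """Walk up the speed ladder and stop just before the first
    disproportionate price jump."""
    choice = models[0]
    for prev, cur in zip(models, models[1:]):
        if cur[2] / prev[2] > max_jump:   # irrational price jump
            break
        choice = cur
    return choice[0]

print(climb_price_ladder(xeons))   # E5430
```

The `max_jump` threshold is a judgment call; the point is simply that the climb stops at the model just below the jump, not at the fastest part.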

The biggest downside to the E5400 series Xeons is that the front side bus is 1333 MHz instead of 1600.  This means that I won’t benefit from 800 MHz memory since the memory bus runs at 667 MHz. 
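A rough back-of-the-envelope calculation shows why faster memory would go unused.  This ignores FB-DIMM serialization overhead and chipset details; the figures are nominal peak rates only:

```python
# Nominal peak bandwidth = transfer rate (MT/s) x 8-byte bus width.
# Back-of-the-envelope only: ignores FB-DIMM serial-link overhead.
def peak_gb_s(mt_per_s, width_bytes=8):
    return mt_per_s * width_bytes / 1000

fsb = peak_gb_s(1333)        # one E5400-series front-side bus
ddr2_667 = peak_gb_s(667)    # one memory channel
ddr2_800 = peak_gb_s(800)

print(f"FSB 1333: {fsb:.1f} GB/s; DDR2-667 channel: {ddr2_667:.1f} GB/s")
# Two DDR2-667 channels (~10.7 GB/s) already match one FSB, so the extra
# ~1.1 GB/s per channel that DDR2-800 offers would go unused.
```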


Motherboard

The Choices
I have used many different vendors for motherboards, but during the last 10 years most of my servers have been built on Super Micro and most of my desktops have been built on ASUS. 

For this server, I needed something that supports two of the CPUs I selected, and although I would be adding separate SAS controllers, I still wanted SAS support on the motherboard.  I also wanted an onboard graphics controller and as many x16 and x8 PCIe expansion slots as possible to handle my SAS controllers and future needs.  Finally, I wanted lots of memory slots and a high maximum memory size, since I know I will be adding more memory in two years.  Other considerations are Gigabit Ethernet ports, bus speeds, memory pipelining, and the latest support chips for the E5400-series CPUs.

I was surprised to find that Super Micro didn’t have a board to fit my needs.  There were compromises I could have made to still produce a successful server, but ASUS had what I wanted.

The Decision
The ASUS DSEB-D16 supports two Xeon processors, a maximum 1600 MHz front side bus, two second-generation x16 PCIe slots, and one x8 slot.  There are also six SATA ports and eight SAS ports, which would be plenty if I were using this for a lower-end server.  I could have saved more than a couple thousand dollars on the final price tag by skipping the SAS controllers, using only four drives for data, and moving the OS, TempDB, and backups to the onboard SATA and SAS ports.  However, the drives are by far the biggest bottleneck in a database server, so I didn’t consider this option.  It’s nice to know that these ports are there if I have a future need.

Very important on this motherboard are the 16 memory slots on four independent channels for a maximum of 128 GB of RAM.  It even supports memory redundancy (mirroring) and hot spares. 

Another useful feature is a set of four Gigabit Ethernet ports, powered by two Intel controllers, that can be teamed together in many configurations to support load balancing and failover.


Memory

Since my motherboard uses four memory channels, I get the best performance by buying memory in sets of four DIMMs.  I also need to make sure I provide sufficient memory for the number of CPU cores I have in order to get the most out of them; I like to start with 2 GB per core.  The board also requires fully buffered DIMMs (FB-DIMMs) with error correction (ECC).  Another consideration is that the individual DIMMs must be large enough to leave the maximum number of slots open for future expansion.  SQL Server makes excellent use of extra RAM for caching both data and execution plans.

With eight cores, I wanted to start with 16 GB, so I ended up selecting four 4 GB DIMMs.  They are 667 MHz DDR2 because of the CPU and motherboard I selected.  When selecting which brand/model to choose, I look at quality, latency timings, and price.
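The sizing rule (2 GB per core, bought as one DIMM per channel) reduces to simple arithmetic.  A minimal sketch:

```python
def dimm_plan(cores, gb_per_core=2, channels=4):
    """Start at gb_per_core per core, populating one DIMM per channel
    so all channels are used and interleaved."""
    target_gb = cores * gb_per_core
    dimm_gb = target_gb // channels   # capacity each DIMM must supply
    return target_gb, channels, dimm_gb

total, count, size = dimm_plan(cores=8)
print(f"{total} GB as {count} x {size} GB FB-DIMMs")   # 16 GB as 4 x 4 GB FB-DIMMs
```

With 16 of the board's slots available, this initial set of four leaves twelve slots free for the planned expansion in two years.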

SAS Controller

The best controllers minimize CPU involvement in data access while maximizing the throughput of the attached drives.  These controllers all offer more RAID variations than I need, since I am using only what are now considered the basic levels.  The PCIe controllers mostly come with either an x4 or x8 card bus.  If selecting a card with more than eight drive ports (i.e., more than two mini-SAS, or SFF-8087, connectors), then x8 is a requirement to prevent a bottleneck.
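That lane-count rule is easy to sanity-check with rough numbers.  The figures below are assumptions for illustration: roughly 250 MB/s usable per first-generation PCIe lane and roughly 125 MB/s sustained per 15K drive:

```python
def bandwidth(drives, lanes, mb_per_drive=125, mb_per_lane=250):
    """Aggregate drive throughput vs. PCIe link throughput, in MB/s.
    Per-drive and per-lane figures are rough assumptions."""
    return drives * mb_per_drive, lanes * mb_per_lane

print(bandwidth(8, 4))    # (1000, 1000): eight drives already fill an x4 link
print(bandwidth(16, 4))   # (2000, 1000): more than two connectors on x4 bottlenecks
print(bandwidth(16, 8))   # (2000, 2000): an x8 link keeps up
```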

I initially need to support 15 drives but I could have as many as 24 in the future.  I also prefer to spread the drives across a minimum of two controllers, and the controllers must be identical.

I ended up with two x8 cards with four mini-SAS connectors each, for direct support of 32 drives.  That may sound wasteful at first, but each card can support my initial needs by itself, which gives me a fallback if one of the cards dies.  It is important to note that there is no standard for how each RAID level is actually implemented, which means that a card from a different manufacturer, or even a different model from the same manufacturer, probably won’t be able to read the data written by another card.  If you don’t have a spare card lying around, or at least spare capacity on existing cards, you can end up with a serious amount of downtime.

A valid alternative would be to get three cards with two connectors each to support a maximum 24 drives.  However, this actually costs more and I would need a fourth card as a spare once I maxed out my drive capacity.
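The card-count comparison works out as follows, given that each mini-SAS connector serves four disks:

```python
import math

def cards_needed(drives, connectors_per_card, drives_per_connector=4):
    """How many identical controllers are needed to attach `drives` disks."""
    per_card = connectors_per_card * drives_per_connector
    return math.ceil(drives / per_card)

print(cards_needed(24, 4))   # 2: two four-connector cards cover up to 32 drives
print(cards_needed(24, 2))   # 3: three two-connector cards, with no headroom left
```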

I try to avoid having spare controllers around, as they are very expensive and become obsolete quickly.  I have had to discard too many unopened boxes.

Power Supply, Video Card, DVD, Floppy Drive

I am not installing a floppy drive.  For the few instances where I need one, I use a USB floppy drive.  Many functions that once required a floppy can now be done using a thumb drive.

For one of the servers, I pulled the DVD reader out of an old laptop; the other server I left bare, temporarily connecting a spare DVD reader during the OS installation.  The rest of my installs were done over the network.  I don’t spend much time selecting a DVD reader for servers since they don’t get used much.

With workstations, I put a lot of effort into the selection of the video card.  But with servers, I prefer video that is integrated with the motherboard.  This is because I rarely access the server directly after I finish the initial installations, and I want to leave the slot open for future expansion.  Nearly all management is done remotely.

Power supplies are a critical component and there is much I could say on this topic.  But I won’t do that here because the case I selected came with two 900W, redundant, hot swappable power supplies.  The power supplies slide out the back if you need to replace them while the server is running.


Before I outline the major configuration steps, I am sure you are ready to see the actual parts list and prices.  Keep in mind that Dell can’t touch the performance of this system at twice the price, and that prices for computer components tend to go down over time.  Don’t be surprised if some of the parts are no longer available by the time you read this.  I did not include prices for any software since different licensing schemes result in wildly different pricing.  Don’t stop reading here, because some of the upcoming configuration steps are critical for maximizing a server’s performance.

Parts List

Item             Description                                                    Price (ea)
------------------------------------------------------------------------------------------
Case             Super Micro 4U rackmount server case,
                 900W (1 + 1) redundant power supplies
Motherboard      ASUS DSEB-D16 server motherboard, dual LGA 771,
                 Intel 5400 chipset, SSI EEB 3.61
CPU              Intel Xeon E5430 "Harpertown", 2.66 GHz, 12 MB L2 cache,
                 LGA 771, 80W quad-core (BX80574E5430P)
Memory (2 kits)  Kingston 8 GB (2 x 4 GB) fully buffered 667 (PC2 5300)
                 dual-channel server memory kit
Hard drive       Fujitsu MBA3147RC, 147 GB, 15000 RPM, 16 MB cache, SAS
SAS controller   Adaptec 2252700-R, PCIe x8
------------------------------------------------------------------------------------------
Grand Total

Next Page: Hardware Setup

Please go to the first page to read or post comments.