This post will be the "which server should I buy?" thread. I plan on doing this with a handful of central questions so that I can return with comments as time goes on. Maybe if it gets too long I will then create a "2nd which server should I buy" thread. These big question threads will be linked from the right-hand navigation column.
I don't really expect anyone to be interested in all this. But if someone is, that is great, and if anyone can contribute anything to the discussion that would be even better.
Here goes: Which server should I buy?
Given that I can pretty specifically say what the server is going to be used for, I think there should be a fairly definite answer to the question. I just don't know what it is yet.
Here's what it will be used for: web serving. Most likely using Apache (although I guess you have to at least give a look at lighttpd with all the attention it's been getting lately.) Almost all requests will be to PHP scripts generating dynamic web pages by pulling data out of a MySQL database. Additionally there will be a rather large 1+ TB store of ~5MB binary files that will be served straight from the file system over HTTP to a limited number of simultaneous connections (I don't need this to scale very high.) So that's all very basic web server stuff. [The reasoning behind this architecture and the various possible debates here will be a different post.]
I was initially very attracted to Apple's Xserves because Mac OS X is what I know best (and what I build things on locally even though they get deployed on linux.) Plus Apple, and the Apple community, seem a little more friendly in my particular situation, which is something like: I don't mind learning a little and even mucking around on the command line, but it's really not a goal of mine to be a sysadmin, so if Apple can supply me the whole widget, with a nice clean way to automatically download and install binaries, I can just worry about Apache, PHP and MySQL (what I like to do,) and not so much about, say, getting non-standard ethernet drivers to compile under linux, or trying to set up DNS without a GUI. In fact, I don't mind paying a little more for someone else (Apple) to make these things easy for me.
Upon further research, however, it seems there are some serious performance questions (they may not actually be problems, but they are certainly questions right now) concerning MySQL. And maybe even Apache as well. Ouch. That's exactly what I want to do. OS X Server and the G5 chip (IBM's 970) are amazing at a whole host of tasks. Unfortunately it seems like the exact thing I need to do isn't one of them.
So while I haven't made my final decision yet, I feel pretty sure - again, given specifically what I want to do - that Linux is the OS you are "supposed" to use. This basically means that the programs I need to run are built and optimized with the linux platform in mind. On the other hand, even if some of the more outrageous claims are true, and MySQL and Apache performance really are an order of magnitude slower on OS X, it might be the case that it is still "good enough". I'm not building eBay here. I think we run on a 700 MHz Pentium right now and I think performance is acceptable. (On the other other hand, I want room to grow....)
So OS X Server vs. Linux is one debate. And then if Linux wins that debate then there is the secondary "which distribution?" question.
I'll get into specific configurations and pricing in the comments.
- jim 6-08-2005 9:33 pm
We use 2.6 because it has a much better scheduler than 2.4. Doing real-time stuff (those video frames just keep coming, 30 times every second) is demanding on schedulers.
- mark 6-08-2005 10:51 pm
Yeah, but you guys do "real" work. Webserving ain't rocket science. My entire pipe is only going to be a couple of Mb/sec. I'm sure you find that amusing Mr. HD. :-) But yeah, I guess that would be a third debate (which kernel to use,) although that decision is probably taken care of by the distribution decision.
I had a pretty interesting day of investigation. I've been focusing on linux since I already know much more about OS X.
First off, wow, the machines are ugly (as if that matters) but definitely cheaper. I guess I knew that already. Part of it is the price you pay for OS X Server vs. the nothing you pay for some distributions of linux (without any guaranteed support of course.)
The distribution of choice for what I want to do seems to be CentOS: "CentOS 2 and 3 are a 100% compatible rebuild of the RHEL 2 and 3 versions, in full compliance with Red Hat's redistribution requirements. CentOS is for people who need enterprise-class OS stability without the cost of certification and support."
As for the server, I have to buy, minimum, a 4U space in the colo I want to be in, so any server that size or smaller is good for me. I figure a 3U is probably better than a 4U since then I can still stick in an additional 1U box if I want to break out any services (DNS? email?)
So I started at Penguin since I know them (although I'm completely out of the loop on this, so I'm going to learn tons more before I make a decision.) And I have to admit I am pretty excited by what I saw.
In the neighborhood of what I was thinking of spending I can get a dual Opteron 3U server with 2 Gigs of ram and eight (!) hot swappable SATA hard drive bays. Damn.
The dream storage configuration I have been thinking about would have two matched drives (not in a RAID) where one is the boot drive (and the rest of the OS and user space) and the second would be a clone of the first drive that gets recloned every night. I don't want RAID 1 because that's not a backup! (A mistyped rm -rf will still wipe out both drives of a RAID 1.) This is what happens on my server now and I think it is a good system. I'm not doing banking or running air traffic control or anything. My way means that in the worst case situation you might lose 1 day of work, but it would be pretty hard (colo facility burns to the ground) to lose more than 1 day. With RAID 1 you'd have an up-to-the-minute backup, but you open yourself up to the possibility of being really stupid and losing everything!
Then, in addition to the boot and clone drive I'd want a bunch of big disks in a RAID 5. For the machine above I spec'ed it with 2 120 Gig SATA drives for drives one and two, and then 6 300 Gig SATA drives in the RAID 5 yielding roughly 1.4 TB of storage. RAID 5 means that the data is striped across all the drives for (in most cases, especially with files that aren't really small) increased read performance. Plus the RAID writes special parity bits to different drives, so at any time 1 of the drives can fail without any lost data. The RAID even continues to run (with decreased performance,) still giving access to all files. Swap in a new drive for the failed one and the RAID will automatically rebuild itself and then return to full performance when done (this can take a long time though.) More than one drive failing at a time and you lose all your data (gulp!) The trade-off here is you get one less drive worth of capacity in a RAID 5. So my 6 x 300 drives only yield 5 x 300 capacity. Seems like a nice trade-off to me.
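To spell out the capacity arithmetic: n drives of size s in a RAID 5 give (n - 1) x s of usable space, since one drive's worth of space goes to parity.

```shell
# RAID 5 usable capacity: one drive's worth of space is lost to parity.
n=6
size_gb=300
echo "usable: $(( (n - 1) * size_gb )) GB"    # prints: usable: 1500 GB
```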
The RAID is where my ~5 meg binary files will be stored, so they should stripe well in RAID 5, and being able to survive a single drive failure is enough security for this data. Complete security here is not as important as for the boot and clone drives.
Those drives (well, the boot drive really) will have all user data and the database. The database is really small though since all the binary files are elsewhere (it's just the text, which really doesn't take up much room.) I'd be surprised if the database ever grew past 10 Gigs, and it's not even 1 yet. So room here isn't a problem, but it's just not any cheaper to buy a drive smaller than 120 Gigs!
Ideally I'd have the database on a separate disk, but with this setup (dual opteron! 2 gigs of ram!) it is just not going to matter for the scale of what I am deploying. And if it ever does I can break MySQL out into a separate machine in the remaining 1U and put it on 3 10,000 RPM SCSI drives in another RAID 5. But if it comes to that then things are going way better than I expect and spending another 2K on such a machine shouldn't be a problem.
Anyway, this hypothetical machine would be $5315. And like I said, the processing power is a bit overkill - not that that's a bad thing. I haven't been able to find anything less powerful with that much storage though and I really need the storage.
If I wanted to pinch a few pennies I could ditch the SATA drives for plain old ATA drives. At least for the RAID. This won't ever be serving many simultaneous connections. On the other hand it doesn't save that much, and SATA drives are much better (ATA drives take a hit on the CPU for each transaction.)
Nothing definite here. Just trying to clarify my thoughts by writing them down.
- jim 6-09-2005 5:16 am
Wow, a lot of thought has gone into that. I'm freaking amazed by the power of servers these days. Our fastest machine is a dual-core quad opteron. Eight 2.6 GHz processors in 3U of rack space, which collectively have over 10 GB/sec access to memory.
We had a RAID 5 lose a disk a couple weeks ago on our main MS Exchange server. The system should have kept running on four disks, but it crashed. In the end, everything came back. By itself RAID 5 is not enough for mission-critical data, so your configuration makes sense.
The only potential weakness in the scheme seems to be a disk failure during the cloning of the boot drive. (Could you periodically back up the relatively small boot disk to the big raid farm?)
The dual opteron is an amazing processor configuration. Anything that's data intensive benefits from the AMD memory architecture.
I barely know how to spell Linux, but we have three guys that build Linux kernels in their sleep. If you go that route, I can pass along questions.
- mark 6-09-2005 10:34 am
Oh there will be questions! It is good to know there is somebody out there.
And yeah, the boot drive is the weak part. If my plan works and I'm actually doing stuff for business customers then I will have to have a secondary strategy as you suggest. And really it should probably be off site. I haven't really got that figured out yet. It's going to be close to me (20 minute walk,) so it's not out of the question that I could just go over there every two or three days and back up the database onto my laptop. That works but it's not very sophisticated. Alternately I could get some cheap space at a virtual host somewhere and just have rsync (incremental snapshots - just sends what has changed since the last rsync) run periodically on cron. Backing it up to the RAID like you suggest makes sense too. The more replication the better.
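The cron + rsync idea is really just one line in a crontab; everything here (host, paths, time of day) is made up for illustration:

```shell
# Illustrative crontab entry: at 4:15 am, push only what has changed since
# the last run to cheap space on a remote host, over ssh.
15 4 * * * rsync -az --delete /var/backups/db/ backup@offsite.example.net:db/
```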
Quad dual core Opterons??? Are you sure you're not working for the military?
- jim 6-09-2005 6:00 pm
I'm just trying to make the world safe for 500 channels of HD. All the crappy TV you're used to, with more pixels!
- mark 6-10-2005 12:20 am
Iron Chef... Fullmetal Alchemist... Sealab 2021... TCM...
- tom moody 6-10-2005 12:29 am
I try to watch as much HD as I can, but I'm only getting in an hour or so a week. Video that's done with a clean HD production chain is simply amazing.
- mark 6-10-2005 12:46 am
AnandTech has an in-depth follow-up to their original article that made me doubt OS X for my apache / mysql plans. Confirms my choice to go with linux for my specific plans.
(Sorry for the lack of updates - lots of news coming soon.)
- jim 9-01-2005 8:02 pm
I feel like I have learned *way* too much about server hardware. I admit it is pretty fun though. The danger for me is my tendency to upsell myself. "You're already spending so much, why not just spend a few hundred more to get X." And of course there are an impossible number of different things you can substitute for X. And each one is only "a little bit more"!
Argh. At least I don't seem to be waiting for the next release like I keep doing on my cell phone. But that's a different story... :-)
Anyway, the most recent round of quotes is for the following machine:
2x 64-bit AMD Opteron 244 1.8 GHz CPUs, 1 MB L2 cache
Tyan Thunder K8SD Pro (S2882G3NR-D) Dual Opteron server board
2GB ECC Registered DDR-400 PC-3200, 2x 1G, 6x open
3ware 9500S-12 12-channel SATA hardware RAID-5/10/50 controller
12x Seagate Barracuda 7200.8 400 GB Serial ATA w/NCQ, hot-swap
Dual Broadcom Gigabit (1000/100/10) NICs + Intel 100/10 3rd NIC
On-board ATI Rage Video, 8M
no provision for CD-ROM or floppy drive (2x USB ports for external peripherals)
SuperMicro SC933T-R760 3U Rack-Mount Case
15x hot-swap SATA carriers & backplane
Supermicro 760W (2+1 380W) triple-redundant power supplies
The drive layout has been slightly complexified (? redundantized?). The plan now is for 3 separate RAID arrays. A 2 drive RAID1 array for the OS. A 2 drive RAID1 array for MySQL. And a 6 drive RAID5 array for mass storage (2 TB). Plus 2 drives as global hot spares that will automatically replace any failed drive from any of the 3 arrays on the fly.
That's a lot of redundancy, but drives are so cheap! Even these 400 GB SATAs. It just seems like it will save so many headaches down the road. Drives will certainly fail. And so you need backups no matter what. But if you can avoid having to reinstall from backup you save yourself a lot of potential headaches. And I don't want any headaches. So RAIDing everything (not RAID0!) and having enough hot spares seems like the safest way.
Still, I'll back up the OS and MySQL RAIDs to the RAID5 (probably every night,) and I'll also back up the database itself (mysqldump) to the staging server at WHQ (that will be Bill's contributed Dell box.)
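The nightly dump could look something like this sketch. The helper and all names (staging host, paths, mysqldump flags) are assumptions, and the dump command is passed in as a string so the plumbing can be tested without a live MySQL server:

```shell
# dump_db CMD DEST: run the given dump command and store its output,
# compressed, at DEST.
dump_db() {
    cmd="$1"
    dest="$2"
    $cmd | gzip > "$dest"
}

# Intended nightly use (hostname and paths are invented):
#   dump_db "mysqldump --all-databases" "/var/backups/db-$(date +%F).sql.gz"
#   rsync -az /var/backups/ staging:/backups/
```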
As soon as the test server is up and running (this will start tomorrow,) I will order the above machine. Although at the rate I am changing my mind about the specs it might be a little different than listed above.
- jim 9-27-2005 8:20 pm
I have a very similar hardware setup...
dual opteron on Tyan 2882 with Dual 9500S-8 cards...
I have had a million problems; the thing is SLOW and iowaits are enormous. I'm running (now) FC4 2.6.16 and I can't seem to tune the thing to get better performance. In fact, it seems to do worse as I tune it. From Bonnie, I'm getting about 25 mbps write and 100 mbps read on the 4 4-disk hardware RAID-5s in there.
Have you found any tricks? What OS/distro are you using? Any problems? What I don't get is... these systems seem to sell like hotcakes, so someone out there must be having a pleasant experience! What IRQ config are you using? What order do you have the card(s) in the slots?
thanks,
Chris
chris(at)themolecule(dot)net
- chris (guest) 5-31-2006 3:47 pm
I'm running CentOS 4.2 (haven't gone to 4.3 yet) with the standard 2.6.9-22.0.2.ELsmp kernel with a single 3ware 9500S-12 connected to 12 Seagate 400 GB SATA NCQ drives. All filesystems are Ext3.
I have had no problems so far, but I honestly can't tell you what the read / write speeds to the arrays are. I haven't bothered to look because it is working fine for the load I am putting on it. But it might be the case that my workload is much less than yours. When I get some free time (!) I'll look into installing Bonnie and get you some numbers.
Do you have the battery backup unit on the 3ware cards? I don't, but I've heard that this will greatly improve write performance (by allowing you to enable the write-back cache.) Might be something to look into. Still, write performance is never going to be too great on a RAID-5. Have you thought about going RAID-50? (Takes more drives though, true.)
I noticed this in the 3ware user guide: "With multiple controllers, the controller ID is switched with Win 2k3/64 and WinXP/32 with the 3ware BIOS vs. 3DM 2 / CLI. o Dell PowerEdge 2600 o Tyan K8S Pro 2882 with the latest AMI BIOS version 2.04"
That sounds like only a Windows issue, but maybe there is something else about having multiple controllers on the Tyan board? I don't know. Might be worth pulling one of the cards and running Bonnie with just a single controller.
And finally, I noticed this thread on the LKML.
Probably not, but I hope that helps. Good luck. I'd be curious to hear back if you do find something specific.
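For comparing numbers later: when I do get to Bonnie, I'd expect the invocation to be roughly this (the mount point and test size are just placeholders for my setup):

```shell
# Run bonnie++ against the array's mount point. -d is the test directory,
# -s is the test file size in MB (use ~2x RAM so the page cache can't hide
# the disks), -u is the user to run as when started from root.
bonnie++ -d /mnt/raid5 -s 4096 -u nobody
```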
- jim 5-31-2006 9:11 pm