From: W B Hacker
Date:
To: exim users
Subject: Re: [exim] RAID level for mailstore
Christopher Meadors wrote:
> Marc Perkel wrote:
>> Tom Brown wrote:
>>> Hi
>>>
>>> I wonder what people think for a RAID level for their mail stores?
>>>
>>> I can only have RAID 1 or RAID 5, and I don't have many users (~500), so what
>>> do people think? This will be on a hardware RAID card, but the RAID
>>> channel will not be dedicated to the mail store, i.e. there will be another
>>> RAID disk sharing the channel.
>>>
>>> Just want to lay the spindles out 'correctly'
>> Get a couple of Seagate Barracuda drives that are 750 GB+ with the 32 MB
>> buffer and use RAID 1. It's both fast and secure, assuming you need speed.
>>
>
> RAID 1 is not fast for writes. It's OK to store "write once, read many"
> data on RAID level 1. Things like applications that, once installed, are
> only launched after that. The problem with RAID 1 is that every write has to
> hit every disk in the set before it is considered complete. So your
> array ends up only being as fast as the slowest drive.
Given that 3-drive RAID1 sets are uncommon, and that most dual-drive sets are
matched, that is not an issue in the 'real world'.
In any case, nothing need 'wait' for the secondary write to complete,
even in an all-software RAID1. With a hardware RAID controller, the writes are
essentially invisible to the OS, even with a write-through cache policy.
>
> RAID 5 is the way to go for all around performance.
Horsepucky. -5 has always been a compromise at best - but more for
capacity & cost than performance.
> It does require a
> minimum of 3 disks, and you only end up with storage equal to one less
> than the set size (vs. 2 minimum and the storage of 1 in RAID 1). RAID
> 5 also gets faster and faster the more drives you add to the set (up to
> the throughput of the bus).
Not so.
> This is because reads and writes are
> distributed across each disk.
You are leaving out the necessity to both compute and store the 'parity'
or error-correction information as well as the data, THEN ALSO update
it, and use it to re-assemble or restore the data in case of failure.
Recovery gets downright tedious if the parity drive is the one that
fails, and distributed parity carries even more overhead. All day and
all night long.
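The parity itself is nothing exotic - here is a rough Python sketch of the
bookkeeping (my own illustration, with made-up block contents, not tied to any
particular controller): XOR the data blocks to get parity, and XOR the
survivors back together to rebuild a lost block.

    # Illustrative only: RAID5-style XOR parity over one stripe.
    # Block contents and stripe width are made-up values.
    def xor_blocks(blocks):
        """XOR a list of equal-length byte blocks together."""
        out = bytearray(len(blocks[0]))
        for block in blocks:
            for i, b in enumerate(block):
                out[i] ^= b
        return bytes(out)

    data = [b"AAAA", b"BBBB", b"CCCC"]   # data blocks on three drives
    parity = xor_blocks(data)            # parity block on a fourth (or rotated)

    # Lose any one data block: XOR of the parity plus the survivors restores it,
    # but only after reading every surviving drive in the stripe.
    rebuilt = xor_blocks([parity, data[0], data[2]])
    assert rebuilt == data[1]

Every update to a stripe means that parity has to be recomputed and rewritten
as well as the data.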
In any case, for RAID5 to even be tolerable, it requires a fast XOR
engine and a dedicated CPU on the controller.
Read 'expensive'. Anything under US$ 300 need not apply, and $1,200 and
up is more realistic.
On the same hardware (media, controller, host) RAID5 write speed can
never match a very risky RAID0, and read speed cannot match RAID1.
The gain used to be the ability to make up for small drives. That has less
appeal than when a 980 MB CDC Wren 4 first dropped under the US$3,000
per unit mark (ca 1989, IIRC).
Further, even ten years later, few motherboards could hold enough RAM to even
cache the directories - so expense was not limited to drives and
controllers. Everything had to be top-end.
> So while one disk would be saturated
> under the write load, in a 3-disk RAID 5 each disk is only seeing 1/3
> of the traffic (plus a little overhead for parity data--which improves
> with more disks).
No - with 'more disks' of the sizes now used, the 'little overhead'
starts to hit a wall.
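A back-of-the-envelope sketch of why (my own numbers, assuming the textbook
read-modify-write behaviour for small RAID5 updates - exactly the pattern a
busy mail store generates):

    # Rough arithmetic, illustrative only.
    # A small logical write (e.g. appending one message file) costs:
    #   RAID1: one write to each mirror side.
    #   RAID5: read old data + old parity, write new data + new parity.
    def ios_per_small_write(level):
        return {1: 2, 5: 4, 0: 1}[level]

    # Usable capacity for a set of same-sized drives.
    def usable_gb(level, drives, size_gb):
        return {1: size_gb, 5: (drives - 1) * size_gb, 0: drives * size_gb}[level]

    print(ios_per_small_write(1), ios_per_small_write(5))   # 2 vs 4 disk I/Os
    print(usable_gb(1, 2, 750), usable_gb(5, 3, 750))       # 750 vs 1500 GB

So the capacity win is real, but each small write costs roughly twice the disk
operations of a mirror, and those operations land on spindles that are also
serving reads.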
A RAID1 with 50% failure is still usable during a rebuild, and the
rebuild is easily prioritized - even deferred.
By comparison, what do you suppose happens to a 5 to 7 disk RAID5 with
two drives down? Even ONE is no picnic.
And - lest you forget - the more drives, the *higher* the surety of
failure. Any multi-disk RAID array has a lower MTBF than any single
drive of the array.
RAID arrays do not suffer *less* failure than single drives. They fail
MORE OFTEN. Especially as large arrays, and the drives they have
traditionally been built with, tend to hit case and rack restrictions and
run much hotter.
They just fail with less immediate impact. Usually, anyway.
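That last point is just arithmetic. With an assumed (made-up) 3% annual
failure rate per drive:

    # Probability that at least one of n independent drives fails in a year.
    # The 3% annual failure rate is an assumption for illustration only.
    afr = 0.03

    def p_any_failure(n, afr=afr):
        return 1 - (1 - afr) ** n

    for n in (1, 2, 5, 7):
        print(n, round(p_any_failure(n), 3))
    # 1 -> 0.03, 2 -> 0.059, 5 -> 0.141, 7 -> 0.192

More spindles means more chances per year of ending up in a degraded,
rebuild-in-progress state.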
>
> For all out performance with total disregard for data safety RAID 0 is
> the way to go (it works sort of like RAID 5 with the distribution of
> reads/writes, but no overhead computing or recording the parity data).
> But if any one disk in a RAID 0 fails all data in the entire array is lost.
>
RAID0 should only be used for data that you can safely lose - 'coz - on
average - you WILL lose it. Buffering streaming video, for example,
where the data is already safely stored elsewhere AND can be regained
'fast enough'.
RAID0+1 may be 'acceptable' - though I'm not recommending it. At least
it can be done with cheaper controllers.
Finally - you still need to back all this up, and the only thing fast
enough to do that well enough that you can confirm it worked is another
identical array.