Re: [exim] Which hardware do you use for your installation?

Author: Sander Smeenk
Date:
To: exim-users
Subject: Re: [exim] Which hardware do you use for your installation?
Quoting Jeroen van Aart (kroshka@???):

> > On the incoming end (mx) we now have about ~5 servers with a single AMD
> > Opteron 2.8GHz Dual-Core CPU, 4G ram, and just one SATA-II disk as
> > storage isn't important.
> But why not use a raid1 instead of the one disk?
> [ .. ] you would have to clone/recreate the system and bring it back up.


All of our 'platform' servers boot via PXE, and depending on which group
in the DHCP config we put the MAC address in, the server automatically
installs itself and uses cfengine to set itself up for whatever function
we need it to perform. It partitions the disks, debootstraps Ubuntu,
installs the necessary packages and configures them from our
SVN repository.

This process runs completely unattended. :)
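
To give an idea, the MAC-to-group mapping is just plain ISC dhcpd
config. A made-up fragment (the group layout, addresses and profile
path below are invented for illustration, not our actual setup):

  # Hypothetical dhcpd.conf fragment: the group a host's MAC address
  # lands in decides which install profile the PXE boot pulls in.
  group {
      filename "pxelinux.0";
      next-server 192.0.2.10;                   # invented TFTP server
      option extensions-path "profiles/mx-in";  # read by the installer
      host mx1 { hardware ethernet 00:16:3e:00:00:01; }
      host mx2 { hardware ethernet 00:16:3e:00:00:02; }
  }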

So although I see your point, it's not necessary in our setup.

> Netapp uses NFS I assume? How well does it work with imap? As far as I
> know using imap on anything but unix format mailboxes on a networked
> filesystem like NFS can cause problems due to file locking and such.


Yep, we use NFS with the NetApps. It works like a charm, no locking
issues whatsoever. I'm not entirely sure how our pop3/imap servers do
their locking. It could well be that they keep state on local disk.

I'd have to dive into that if you really want to know ;)

> > have their spool in memory and 'message logging' is turned off (-Mvl
> > doesn't work).
> I remember reading on here that having the spool in memory isn't such a
> good idea, but I forgot the exact reasons.


Well, if there's a lot of mail in the queue for some reason and the
server spontaneously reboots or crashes, you will lose that mail. That's
probably the only argument I can see against having the spool in memory.
But only servers that 'transit' mail have their spools in memory.
Messages shouldn't be on those servers for longer than about a minute.

Our monitoring setup keeps a close eye on the queue sizes and queue
times, so we get notified immediately if something is wrong.
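
As a sketch of the kind of check that does this (not our actual
monitoring code, and the threshold is invented), a Nagios-style plugin
wrapped around 'exim -bpc' is enough:

  #!/usr/bin/env python
  # Hypothetical queue-size check: 'exim -bpc' prints the number of
  # messages currently in the queue; alert once it exceeds a threshold.
  import subprocess
  import sys

  MAX_QUEUE = 100  # invented threshold; a transit box should stay near zero

  count = int(subprocess.check_output(["exim", "-bpc"]).decode().strip())
  if count > MAX_QUEUE:
      print("CRITICAL: %d messages queued" % count)
      sys.exit(2)  # Nagios CRITICAL
  print("OK: %d messages queued" % count)
  sys.exit(0)      # Nagios OK

A similar check over the queue times (e.g. parsing 'exim -bp' output
for old messages) catches mail that sits around too long.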

We -had- to do this tmpfs trick: Exim couldn't handle the load with the
spool on disk, even in a RAID setup on SATA-II disks.
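
For anyone wanting to try the same: putting the spool on tmpfs is a
single fstab line, something like this (the size, uid and gid are
invented, adjust for your system; /var/spool/exim4 is the
Debian/Ubuntu path):

  # Hypothetical /etc/fstab entry putting the Exim spool on tmpfs.
  tmpfs  /var/spool/exim4  tmpfs  size=2g,mode=750,uid=101,gid=103  0  0

Just remember that whatever is queued at a crash or reboot is gone,
as described above.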

Thanks for your ideas!

Regards,
-Sndr.
--
| /dev/hda1 has been checked 20 times without being mounted, mount forced
| 1024D/08CEC94D - 34B3 3314 B146 E13C 70C8 9BDB D463 7E41 08CE C94D