Nigel Metheringham wrote:
[...]
> How are things if you give each system its own separate spool directory
> in the same directory tree? If that still gives you poor scaling
characteristics then you are maxing out the filesystem and there's not a
> lot you can do. [And if that method does give you a speedup then you
> could just run with it - if you lose a box you just fire up a queue
> runner with a config file that changes the spool dir]
It gives me good performance of about 14 mails/second for each server,
which is almost as fast as running on the local filesystem (which is
quite slow, by the way). It is certainly much better than sharing a
single spool directory.
However, we are currently running a heartbeat daemon and have several
exim installations on each server, so if exim1 on server1 goes down,
exim1 can be started on another server to take over. I consider that
setup quite annoying, since it is a lot of work to configure and
maintain. I fear that your suggestion of detecting the failure and
firing up new queue runners on an alternate server would give us some,
but little, benefit compared to our current solution.
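For what it's worth, the failover you sketch (a surviving box starting a
queue runner against the failed node's spool) needs little more than a
per-instance config that overrides the spool path. A minimal sketch; the
paths and filenames are illustrative, not from our actual setup:

```
# exim1.conf -- per-instance Exim configuration (illustrative paths)

# Point this instance at its own spool directory on the shared tree.
spool_directory = /shared/spool/exim1

# On failover, a surviving box can pick up this instance's queue by
# starting a daemon against the same config, e.g.:
#   exim -C /etc/exim/exim1.conf -bd -q30m
# (-C selects the alternate config file, -bd runs the daemon,
#  -q30m starts a queue runner every 30 minutes)
```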
[...]
> You may also want to investigate the facilities for running multiple
> systems off the same spool dir (no I can't remember the right keyword
> for it), but there was a way of splitting stuff based on one of the
> fields within the spool filename.
split_spool_directory?
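If that is the option you mean, my understanding is that it makes Exim
hash queued messages into 62 single-character subdirectories of the
input directory, keyed on a character of the message id, which keeps
individual directories small on large queues. A sketch of the main
configuration setting:

```
# Main configuration section (sketch): split the spool's input
# directory into 0-9, A-Z, a-z subdirectories so no single directory
# grows huge under load.
split_spool_directory = true
```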
> What filesystem is it you use?
Currently I'm investigating "PolyServe Matrix Server", www.polyserve.com
--
CU,
Patrick.