Re: [exim] Exim 5.x

Author: W B Hacker
Date:
To: exim users
Subject: Re: [exim] Exim 5.x
W B Hacker wrote:
> Phil Pennock wrote:
>> On 2011-05-19 at 12:35 +0100, Dominic Benson wrote:
>>> On 19/05/11 11:50, Phil Pennock wrote:
>>>>
>>>> New queuing system refers to an approach to scale up the spool
>>>> directory
>>>> to something more queue-like, with segregated admin-defined queues (eg,
>>>> "big_freemail_provider_x"). This is because while Exim is excellent at
>>>> inbound mail, it doesn't always scale as well as some would like for
>>>> outbound mail which can't be immediately delivered. Nothing has been
>>>> done on this. Patches welcome.
>>>>
>>> Is this AKA bug 336? It sounds quite interesting, so I think I might
>>> have a look at making some inroads into the problem.
>>>
>>> If there are any notes/thoughts about behaviour/config/use it would
>>> be handy to hear them.
>>
>> I wasn't aware of 336, but yes it's related.
>>
>> At present, there's split_spool_directory, which divides things up with
>> one level of hashing, and then some people script their own queue-runner
>> launchers, running in parallel over sub-trees of the split spool instead
>> of having the Exim daemon launch runners over everything, which compete
>> with each other.
>>
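
Such a launcher can be as simple as a handful of parallel runners, each
pinned to one heavy destination via -R (domains below are placeholders,
not anyone's real setup):

  #!/bin/sh
  # Hypothetical sketch: one queue runner per big destination, run in
  # parallel, instead of one daemon-spawned runner crawling everything.
  # -R does a queue run restricted to messages with an undelivered
  # recipient containing the given string.
  for dom in bigfreemail-a.example bigfreemail-b.example; do
      exim -R "$dom" &
  done
  wait
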
>> Nothing more specific was discussed, that I either recall or find in the
>> minutes; we all understood the general problem.
>
> Known bound, yes.
>
> Problem?
>
> Not sure.
>
> I stand on the position that WHEN one is loading Exim so heavily that it
> HITS that sort of bound, one has far too many eggs in one basket
> downtime-risk-wise, and should split that load over multiple Exim
> instances (free)....
>
> ... on multiple separate boxen (generally cheap). Or at least usually
> 'cheapER' than answering complaints from such a huge user base - or
> rushing to a remote data center.
>
>>
>> If we use a new sub-directory of spool_directory to hold named queues,
>> then previous Exim installs won't know of the content, but it should be
>> fairly easy to script a rollback tool which recombines queues into the
>> original queue.
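
A rollback tool for that could be near-trivial - just shovel the
per-queue spool files back into the stock input directory. Purely
hypothetical, since that named-queue layout doesn't exist yet:

  #!/bin/sh
  # Assumes named queues would sit under $SPOOL/queues/<name>/input,
  # mirroring the stock $SPOOL/input layout (msgid-H, -D, -J files).
  # split_spool_directory hashing would need one more level of glob.
  SPOOL=/var/spool/exim
  for q in "$SPOOL"/queues/*/; do
      mv "$q"input/* "$SPOOL/input/"
  done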


*snip*

..or doesn't need to do so at all...

Here's a use-what-we-have scenario for a high load on a costly (leased?)
single box.

Simulate a cluster of virtual servers, but w/o the noise and nuisance of
jails or actual VMs:

- create a directory tree per instance, preferably on separate
devices/mounts, ELSE SSDD ('Disk Drive', not 'Different Day') -
optionally [sub]named per listener-IP.

- install or clone exim1 through exim(n), each with its executables and
~/configure renamed to suit and its own listener IP, plus a master
script to control the sequence and time-spacing of startup (both
sketched below).
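
Something like this for the master script - names, paths, IPs, and
counts invented for illustration only:

  #!/bin/sh
  # Hypothetical launcher: one daemon per 'partition', each with its
  # own tree, renamed binary and configure, and staggered startup.
  for n in 1 2 3 4; do
      # per-instance tree, ideally each on its own device/mount
      mkdir -p /var/spool/exim$n /var/log/exim$n
      # -bd: daemon; -q15m: queue-runner interval; -C: that instance's
      # renamed configure. NB: Exim 4.73+ wants -C paths listed in
      # TRUSTED_CONFIG_LIST at build time, else root privs are dropped.
      /usr/local/sbin/exim$n -bd -q15m -C /usr/local/etc/exim/configure$n
      # time-spacing between startups
      sleep 5
  done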

These CAN (with same-group privs) all write to a common mailstore on
yet another mount, to make life easier for POP/IMAP. Or those can ALSO
remain separate - for similar load-management reasons.
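
For the shared-store case, the delivery transport in each instance's
configure just needs group-friendly modes on that mount - something
along these lines (transport name, path, and group are examples only):

  # identical transport in every instance's configure
  maildir_common:
    driver = appendfile
    directory = /mailstore/$domain/$local_part
    maildir_format
    group = mailstore
    mode = 0660
    directory_mode = 2770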

Each of these, umm, 'partitions'(?) will have its own queue, hints
DBs, logs (or not) .. everything else.
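
I.e., a handful of per-instance settings in each renamed configure
(names and IPs invented):

  # exim2's configure: everything private to this 'partition'
  # (hints DBs land under spool_directory/db automatically)
  spool_directory = /var/spool/exim2
  log_file_path = /var/log/exim2/%slog
  pid_file_path = /var/run/exim2.pid
  local_interfaces = 192.0.2.2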

So long as the BOX and OS have the guts to pull it off, the Exim
queue issue - and all else - is compartmentalized into smaller chunks
of the load...

CAVEAT: Separate HDD spindles and head positioners, e.g. whole drives
- de rigueur. ELSE, solid-state aside, the r/w load creates the
unenviable, and ultimately costly, sound of self-fornicating machinery
of the sewing-machine phylum.

I'd even bet this has been done already...

;-)


Bill