Re: [Exim] Expensive Queue Flushes & Retry

From: Philip Hazel
Date:
To: Chris
CC: exim-users
Subject: Re: [Exim] Expensive Queue Flushes & Retry
On Thu, 27 Sep 2001, Chris wrote:

> What are your thoughts on having the queue runner do a bit more
> examination of a message before forking a process to attempt it?
> Could the more trivial logic that's run between fork() and wait*()
> in any way be implemented in the parent?

It's not that trivial! It has to go through all the addresses and route
them, which means doing DNS lookups and so on. Exim remembers nothing
about previous routing, so it isn't just a matter of looking up saved
data for the message, or rather, for its addresses.

I suspect it isn't really the fork() that is your problem; it is the
work that Exim does after it has forked. (My understanding is that
fork() is relatively cheap on modern operating systems, but IANAE.)
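The claim that fork() itself is cheap can be sanity-checked on a given
system. A rough sketch (in Python, purely illustrative and not part of
Exim) that times a fork / immediate-exit / wait cycle:

```python
import os
import time

# Time N fork + immediate-exit + wait cycles to estimate raw fork() cost.
N = 200
start = time.monotonic()
for _ in range(N):
    pid = os.fork()
    if pid == 0:
        # Child: exit immediately, without running any cleanup handlers.
        os._exit(0)
    os.waitpid(pid, 0)
elapsed = time.monotonic() - start
print(f"fork+wait cycle: {elapsed / N * 1e6:.1f} microseconds on average")
```

On a typical modern system this comes out small compared to routing
every address in a message via DNS, which supports the point that the
expense is in the post-fork work, not the fork itself.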

> If these great retry rules could be applied less expensively to a
> large queue of already problematic messages, Exim would completely
> rock as a large fallback host.

I'm afraid the whole design of Exim is predicated on the belief that it
runs in situations that are precisely the opposite of this; that is,
first-time deliveries succeed most of the time, and retry attempts are
rare. (Which they are in my environment: over 95% of messages are
delivered at the first try.) I'm sorry, it just isn't built for your
situation.

Meanwhile, have you done the other recommended things for large queues?
(a) Use split_spool_directory.
(b) Ensure that you have an efficient DNS server with a large cache,
    either on the same host or nearby on a fast connection.
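As a sketch, those two suggestions might look like this (the Exim option
name is from the Exim specification; the resolver address is an
assumption for illustration, presuming a local caching name server):

```
# In Exim's main configuration section: spread queued messages across
# multiple spool subdirectories instead of one huge directory.
split_spool_directory = true
```

```
# In /etc/resolv.conf: point at a nearby caching resolver.
# (127.0.0.1 assumes a caching name server runs on this host.)
nameserver 127.0.0.1
```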

-- 
Philip Hazel            University of Cambridge Computing Service,
ph10@???      Cambridge, England. Phone: +44 1223 334714.