Re: [exim] Separate spool directories

From: Tore Anderson
Date:
To: exim-users
Subject: Re: [exim] Separate spool directories
* Philip Hazel

> I don't think it worth putting an item that is in effect "completely
> re-design the way Exim works" on the wish list. :-)


Aww. That's a shame. :-)

> I'm sorry, but it just isn't designed that way. The environment I
> wrote Exim for is one in which well over 90% of messages are
> delivered right away, and consequently, the queue is short.


Yep. Which is probably why it grinds to a halt with longer queues.

> I never foresaw that people would be using Exim to handle millions of
> messages a day.


I wonder how much mail all the Exim installations around the globe
shuffle in one day in total... Billions, surely. :)

* Tore Anderson

> For example, spool_directory could just be expanded. Then I
> could just configure it thusly:
>
> .ifdef SPOOL_DIRECTORY
> spool_directory = SPOOL_DIRECTORY
> .else
> spool_directory = ${if def:acl_m0 {$acl_m0}{/var/spool/exim}}
> .endif


* Philip Hazel

> That is an interesting idea, but it would involve deciding exactly
> when the expansion should take place.


Right before writing the message out to the spool, I'd guess? Right
before the DATA ACL, in other words. Or do you need access to the
spool directory before that?

> Also, it wouldn't work because of a chicken-and-egg problem. The
> variable $acl_m0 is stored, for each message, .... wait for it ....
> *on the spool* :-( So when an
> instance of Exim is started to deliver message X, it has to read X
> from the spool before it can find the spool. Hmm. See what I mean
> about redesign?


Well, I generally assume that you'd have to have separate queue
runners for each spool, for instance using a macro as above. For that
situation, there's no problem; I have a few low-throughput
installations using that trick for other purposes, and it works just
fine. However, the possible problem I see is only with immediate
deliveries, because, as you point out, the immediate delivery process
doesn't necessarily know where to find the message the daemon told it
about with -Mc.
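
Concretely, something like this (paths are illustrative; -D defines a
macro on the command line, as in my -DSPOOL_DIRECTORY example quoted
further down):

    # Queue runner for the default spool (macro undefined, so the
    # .else branch of the config above applies):
    exim -q10m

    # Dedicated queue runner for an alternate spool:
    exim -DSPOOL_DIRECTORY=/var/spool/exim-alt -q10m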

> But how would you choose a queue? Remember that a message may have
> multiple recipients. Or would you restrict such messages to a single
> recipient?


My problem, not yours! :-) And the expansion language will enable me
to solve it to my satisfaction without needing you to have foreseen
exactly what I want my system to do.

But since you ask, I'll provide some examples:

For the MMS clusters I spoke of before, I'd go for the (last)
recipient and that's that. MMSes sent to several recipients are few
and far between, and it wouldn't matter much if one message out of
500000 wasn't destined solely for the peering partner the queue was
intended for. The intention is of course to keep those 500000
messages out of the way of messages destined for other peers that are
alive and well. Just setting the spool_directory explicitly on every
run of the RCPT ACL, not caring whether it was set before (so the
last recipient wins), would accomplish that just fine.
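
A minimal sketch of the RCPT ACL I have in mind, assuming an
expandable spool_directory as proposed above (the peer domain and the
path are made up for illustration):

    check_rcpt:
      # Stamp the peer's spool into $acl_m0 on every matching RCPT;
      # the last recipient wins, which is good enough for MMS.
      warn condition = ${if eq{$domain}{mms.peer.example} {yes}{no}}
           set acl_m0 = /var/spool/exim/peer

      accept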

Another example: right now I'm the frustrated admin of a system that
is being flattened by Yahoo Groups. With a bulldozer. My oh my, how
I wish I could just dump all messages from 66.94.237.0/24 into another
queue with queue_only, and be able to offer decent low-latency service
to all the other users. Just being able to place that spool on
another filesystem entirely would have been wonderful; right now the
filesystem the spool is on fills up faster than I can shuffle MXes in
and out of the cluster. Having to defer the YG messages after DATA
because of no available space in the spool would be acceptable; that
it affects every single user is a true PITA.
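
A sketch of how I'd wire that up, again assuming the expandable
spool_directory (only the network is from the situation above; the
path is illustrative):

    check_rcpt:
      # Shunt the problem network onto its own spool (on its own
      # filesystem) and queue the messages rather than attempting
      # immediate delivery.
      warn hosts = 66.94.237.0/24
           set acl_m0 = /var/spool/exim-yg
           control = queue_only

      accept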

* Tore Anderson

> And of course I'd need separate queue runners started with
> -DSPOOL_DIRECTORY=/whatever. Not sure how it'd affect immediate
> deliveries though..


* Philip Hazel

> Immediate deliveries are no different from any other deliveries,
> except that they start as soon as a message arrives.


...and that they do not receive the value of $acl_m0 from the daemon
process, hence are unable to expand spool_directory the same way the
daemon process did, resulting in (with my expansion, at least) the use
of a different spool directory - and ultimately failure.
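
To illustrate the failure mode with my expansion (the message ID is
made up; -Mc is the real option the daemon uses for immediate
deliveries):

    # Receiving process: $acl_m0 was set in the ACL, so the message
    # was written to /var/spool/exim/peer.
    #
    # The daemon then spawns the immediate delivery roughly as:
    #   exim -Mc 1ABCDE-000001-XY
    # In that subprocess $acl_m0 is unset, so
    #   ${if def:acl_m0 {$acl_m0}{/var/spool/exim}}
    # expands to /var/spool/exim, where the message doesn't exist.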

* Tore Anderson

> How about it? Is it doable without too much hassle?


* Philip Hazel

> I doubt it, but I have been proved wrong in the past when I've said
> that. :-)


I sincerely hope you are again. :-)

I'll try another suggestion. Let's say:

    spool_directory = /var/spool/exim/default


    begin acl
    check_rcpt:
      [...]
      warn condition = ${if eq{$domain}{mms.vodafone.co.uk} {yes}{no}}
           control = override_spool_directory=/var/spool/exim/vodafone


Now, this override_spool_directory (poor name, just to distinguish it
from the main configuration setting) would have to follow the message,
like control = queue_only would have done. Also, you'd have to add a
new parameter enabling you to override the spool_directory main
setting. I'll call it "-SD".

So, when you get to the start of DATA, you see if
override_spool_directory was set, and if it was, you write the message
to that directory instead of the default one set in the main
configuration section. Furthermore, if you decide to do immediate
delivery on this message, and there's an override_spool_directory
present, you just add -SD/var/spool/exim/vodafone to the delivery
subprocess' command line.
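
In sketch form (the -SD option and the message ID are hypothetical;
-Mc is real):

    exim -Mc -SD/var/spool/exim/vodafone 1ABCDE-000001-XY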

I'd also need a dedicated queue runner process for this spool
directory, of course. Similarly, this would just use the -SD
parameter, for instance "exim -q10s -SD/var/spool/exim/vodafone".
You'd also need the -SD parameter for other commands such as -bp,
-Mvl, -R, et cetera. If you omit -SD, the default set in the main
configuration applies, making the whole thing backwards compatible.
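
For instance (all hypothetical, building on the proposed -SD option;
-bp, -Mvl and -R keep their usual meanings, and the message ID is
made up):

    # List the Vodafone queue:
    exim -bp -SD/var/spool/exim/vodafone

    # Show the message log for one message on that spool:
    exim -Mvl -SD/var/spool/exim/vodafone 1ABCDE-000001-XY

    # Queue run for matching addresses on that spool:
    exim -R -SD/var/spool/exim/vodafone vodafone.co.uk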

You still think it's impossible? Why? (Surely I must've thought
of everything!) :-)

> The usual workaround is to shunt off messages that cannot be
> delivered quickly onto a "backup queueing host".


Yeah. Expensive though, and IMHO it really shouldn't be necessary.

--
Tore Anderson