Re: [exim] Limiting the number of pipe processes

Author: Phil Pennock
Date:  
To: Marko Lalic
CC: exim-users
Subject: Re: [exim] Limiting the number of pipe processes
On 2013-09-04 at 13:37 +0200, Marko Lalic wrote:
> I am wondering if there is a way to set an exact limit to the number
> of processes spawned by piping received messages to other processes.


Yes, provided that you only permit deliveries to happen via
queue-runners. By default Exim short-circuits the queue and attempts
immediate delivery, and various options are available to block that;
`queue_only_load` is one of them.

For an absolute limit, set `queue_only` and set `queue_run_max`. All
deliveries will then happen via queue-runners, which will each process
one message at a time, and `queue_run_max` will control how many
queue-runners you launch.

> Additionally, it would be nice if this setting could be applied on a
> per-domain basis (or router/transport).


If the Python script is only invoked for messages to be sent remotely,
then you might set `queue_smtp_domains = *` instead of `queue_only`.
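That variant would look like this (again, the `queue_run_max` value is just an example):

```
# main configuration section
queue_smtp_domains = *   # queue remote (SMTP) deliveries; local ones stay immediate
queue_run_max = 5
```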

> In the scenario I have, each received message needs to be processed by
> a Python script. However, it can happen that a few hundred mails are
> received in the same time causing a large number of processes to be
> spawned. This in turn makes the system go OOM crashing other important
> services. Thus, I would like to be able to specify a maximum number of
> processes which should be allowed.


Someone was working on letting Python be embedded into Exim, much as
Perl can be. That would avoid the fork/exec overhead.

Myself, I would have the Python run as a long-lived daemon which runs in
the Exim group (so it can see the queue and read messages), and talk to
it with a simple command/response protocol over a Unix-domain socket.
I'd then use:

    ${readsocket{/socket/name}{SCAN $message_exim_id}...other-opts...}


to send a command to the socket and read the response. You could then
have the Python code maintain a thread pool sized for the amount of
concurrency you really want, and use a service supervisor (monit,
upstart, whatever) to make sure that the daemon is running.
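A minimal sketch of such a daemon in Python. Everything here is illustrative, not anything Exim provides: the socket path, the one-line `SCAN <message-id>` protocol, and the placeholder `scan()` body would all be yours to define; the fixed-size thread pool is what actually caps the concurrency.

```python
import os
import socket
from concurrent.futures import ThreadPoolExecutor

SOCKET_PATH = "/tmp/scan-daemon.sock"  # hypothetical; must match the readsocket path
MAX_WORKERS = 4                        # the real concurrency limit you want

def scan(message_id: str) -> str:
    # Placeholder for the real per-message work (e.g. reading the
    # spool files for this message ID and running your checks).
    return f"OK {message_id}"

def handle(conn: socket.socket) -> None:
    # One request per connection: read a line, answer, close.
    with conn:
        line = conn.makefile("r").readline().strip()
        if line.startswith("SCAN "):
            reply = scan(line[5:])
        else:
            reply = "ERR unknown command"
        conn.sendall((reply + "\n").encode())

def serve(path: str = SOCKET_PATH) -> None:
    if os.path.exists(path):
        os.unlink(path)
    # The pool size, not the accept loop, bounds how many scans run at once;
    # excess connections simply wait in the pool's queue.
    pool = ThreadPoolExecutor(max_workers=MAX_WORKERS)
    with socket.socket(socket.AF_UNIX, socket.SOCK_STREAM) as srv:
        srv.bind(path)
        srv.listen()
        while True:
            conn, _addr = srv.accept()
            pool.submit(handle, conn)
```

Since handlers run in threads rather than forked processes, memory stays bounded no matter how many messages arrive at once, which is exactly the OOM problem you're trying to avoid.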

-Phil