Re: [exim] Smarthost + queue worker keep alive the connection

Author: Maeldron T.
Date:
To: exim-users
CC: Graeme Fowler
Subject: Re: [exim] Smarthost + queue worker keep alive the connection
Thank you for the answer.

I’m using queue_only_load on the smarthost and I used to have queue_only =
true on the internal server.
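
Roughly, the relevant main-configuration lines look like this (the load
threshold below is only an illustrative value, not my real one):

    # internal server: never attempt immediate delivery, only queue
    queue_only = true

    # smarthost: queue instead of delivering immediately when the system
    # load average goes above this value
    queue_only_load = 8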

The -qq isn’t a solution for multiple reasons.

1. When I tried to run the queue worker with the -qq option, it didn't
start sending the emails for maybe fifteen minutes. There was not much
disk activity by exim (I checked with top -m io -o total), and I have no
idea what it was doing. After a while I killed it, because I had to get
the emails out. That was the smarthost, which had about 20,000 queued
emails at the time. The internal server had about 700,000 queued emails
and also didn't start sending them within a reasonable time when I tried
-qq there.

2. I found no way to make exim start as a daemon and run its queue
workers with the -qq option. I had to start them from crontab, and they
didn't seem to respect the configured limit on the number of queue
workers: as many as I started were running (see the sketch after this
list for the daemon-based setup I'm aiming for). I can't afford to write
a process manager that keeps track of the live exim queue workers.
Normally, when there aren't hundreds of thousands of emails to send, the
emails should go out as soon as possible, which is why I start a queue
worker every minute. Without process management, the -qq queue workers
would eat up the memory in maybe two hours.

3. Based on the manual, -qq makes exim send the emails that belong to
the same host over a single connection. I can see that might help a bit,
but it wouldn't always work. At the same time, when all the emails go to
the same smarthost, I don't see why a host database has to be built in
the first place, because all the emails could be (and should be) sent
over the same connection, as it's always the smarthost. Based on the
manual, that's not what would happen, and it didn't even send any email
when I tried.
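
For reference, the setup I'm aiming for is roughly the sketch below
(transport name and binary path are as in a stock install, and the values
are only illustrative). As far as I can tell, queue_run_max only limits
queue runners started by the daemon, which would also explain why the
cron-started ones ignored it.

    # main configuration: at most this many simultaneous queue runners
    # started by the daemon
    queue_run_max = 4

    # smtp transport used for the smarthost route: cap on the number of
    # message deliveries sent over one TCP/TLS connection (500 is already
    # the default)
    remote_smtp:
      driver = smtp
      connection_max_messages = 500

    # start the listening daemon plus a queue runner every minute
    /usr/sbin/exim -bd -q1m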

M



On Mon, Jan 20, 2020 at 12:23 PM Graeme Fowler via Exim-users <
exim-users@???> wrote:

> On 18 Jan 2020, at 21:39, Maeldron T. via Exim-users <exim-users@???>
> wrote:
> > Now the only problem left is that I believe the worker processes
> > connect to the smarthost for every message one by one.
> >
> > I didn’t find anything in the documentation (current 4.x version) that
> > seemed to allow me to tell the queue workers to send more than one email
> > through the smarthost.
> >
> > This costs me a lot of time and hard cash too especially due to the SSL
> > overhead. Did I miss something? Is there a way to make it work?
> >
> > (To make it clear, I’m talking about the queue workers on the first SMTP
> > server and not on the smarthost).
>
> If you're using a fairly standard configuration, Exim will attempt to
> deliver each message as it arrives. If they arrive from your code
> one-by-one, it'll send them one-by-one. You should let them queue first and
> then allow the queue runner to send multiple per connection - usually
> they'll be batched by destination IP address or domain.
>
> Put your first SMTP server in queue only mode, using 'queue_only = true'
> in your main configuration. This will prevent immediate delivery.
>
> Run your queue frequently - either via the command line switch, where
> '-q30s' will for example spawn a queue runner every 30 seconds; or via cron
> or a systemd timer depending on your init system. The frequency and maximum
> number of queue runners needed will be something you can experiment with.
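
For reference, a cron-driven variant of this could look something like the
line below (assuming the usual /usr/sbin/exim path; exim -q starts a single
queue run and then exits):

    # /etc/crontab: start one queue run every minute
    * * * * *  root  /usr/sbin/exim -q
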
>
> Use split_spool_directory to ensure a spread of queue files across a spool
> tree rather than having them in a single directory.
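
In the main configuration that is a single boolean; it hashes the queue
files into 62 subdirectories of the spool's input directory:

    split_spool_directory = true
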
>
> If your queues are very (excessively) large, consider splitting the emails
> into named queues rather than the default queue, and run each of those
> frequently via the method described above.
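
If I read the spec correctly, named queues need a reasonably recent 4.x
(4.88 or later): a message is assigned to a queue with the queue ACL
modifier, and a queue runner is pointed at it with -qG. A rough sketch
(the queue name and host list are only examples):

    # in an ACL, e.g. acl_smtp_rcpt: spool mail from the app servers
    # on its own named queue
    warn  hosts = 10.0.0.0/8
          queue = bulk

    # run a queue runner over that named queue only
    /usr/sbin/exim -qGbulk
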
>
> Bear in mind that (for a given system) the definition of "large" will
> vary, but basically every queue runner will iterate over the entire queue
> (in random order, by default), so you're highly likely to end up disk-bound
> unless you've got a *really* fast disk subsystem. Split spool improves
> that, and named queues would improve that even more.
>
> HTH
>
> Graeme
> --
> ## List details at https://lists.exim.org/mailman/listinfo/exim-users
> ## Exim details at http://www.exim.org/
> ## Please use the Wiki with this list - http://wiki.exim.org/
>