Re: [exim] Smarthost + queue worker keep alive the connectio…

Author: Graeme Fowler
Date:  
To: exim users
Subject: Re: [exim] Smarthost + queue worker keep alive the connection
On 18 Jan 2020, at 21:39, Maeldron T. via Exim-users <exim-users@???> wrote:
> Now the only problem left is that I believe the worker processes connect to
> the smarthost for every message one by one.
>
> I didn’t find anything in the documentation (current 4.x version) that
> seemed to allow me to tell the queue workers to send more than one email
> through the smarthost.
>
> This costs me a lot of time and hard cash too especially due to the SSL
> overhead. Did I miss something? Is there a way to make it work?
>
> (To make it clear, I’m talking about the queue workers on the first SMTP
> server and not on the smarthost).


If you're using a fairly standard configuration, Exim will attempt to deliver each message as soon as it arrives. If messages arrive from your code one by one, Exim will send them one by one. Instead, let them queue first and allow the queue runners to send multiple messages per connection - deliveries are usually batched by destination IP address or domain.
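For reference, the number of messages a queue runner will push down a single SMTP connection is controlled on the smtp transport by connection_max_messages (default 500). A minimal smarthost transport might look like this - the transport name and routing are illustrative, not taken from your configuration:

```
# transports section - illustrative smarthost transport
smarthost_smtp:
  driver = smtp
  # reuse each TCP/TLS connection for up to this many queued messages
  connection_max_messages = 500
```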

Put your first SMTP server in queue only mode, using 'queue_only = true' in your main configuration. This will prevent immediate delivery.
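Concretely, that's a one-line change in the main section of the configuration (before the first "begin" line); messages then sit on the queue until a queue runner picks them up:

```
# main configuration section
queue_only = true
```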

Run your queue frequently - either via the daemon's command-line switch (for example, '-q30s' spawns a queue runner every 30 seconds), or via cron or a systemd timer, depending on your init system. The right frequency and maximum number of simultaneous queue runners are something you can experiment with.
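As a sketch, either invocation style works; the binary path and interval below are illustrative:

```
# daemon with a built-in queue runner spawned every 30 seconds
exim -bd -q30s

# or a one-shot queue run, suitable for cron or a systemd timer
/usr/sbin/exim -q
```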

Use split_spool_directory to ensure a spread of queue files across a spool tree rather than having them in a single directory.
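That's again a single main-section boolean; Exim then hashes messages across spool subdirectories instead of keeping one flat directory:

```
# main configuration section
split_spool_directory = true
```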

If your queues are very (excessively) large, consider splitting the emails into named queues rather than the default queue, and run each of those frequently via the method described above.
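A sketch of named queues (available in modern Exim 4 releases): the 'queue' ACL modifier diverts matching messages to a separate queue, which you then run on its own schedule. The condition, domain, and queue name here are invented for illustration - check the named-queues documentation for your version:

```
# in an ACL such as acl_smtp_rcpt: divert matching mail to its own queue
warn  condition = ${if eq {$sender_address_domain}{bulk.example.com}}
      queue     = bulk

# run a queue runner for just that queue
exim -qGbulk
```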

Bear in mind that (for a given system) the definition of "large" will vary, but basically every queue runner will iterate over the entire queue (in random order, by default), so you're highly likely to end up disk-bound unless you've got a *really* fast disk subsystem. Split spool improves that, and named queues would improve that even more.

HTH

Graeme