Author: Dave C. Date: To: exim-users CC: Philip Hazel Subject: [Exim] Thoughts on load..
I am using exim 3.34, and this may have already been done in 4.0, but if
not, I think it would greatly enhance exim's performance.
When trying to dequeue a very large number of messages, I have noticed a
few things.
One: if I do an exim -q, it delivers one message at a time. It doesn't
overload the machine, but it takes forever, taking no advantage of
parallelism. Since a LOT of the time is spent waiting for DNS servers
to answer or for remote SMTP servers to accept and process connections, this
is incredibly inefficient.
If I run several queue runner processes, it goes slightly faster, but
the incremental gain seems to be lost because they stumble over each
other (e.g., "another process is handling this message").
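(By "several queue runner processes" I mean something along the lines of
this rough sketch; the runner count and the path to the exim binary are
just example values:)

    #!/usr/bin/env python3
    # Rough sketch: start several independent queue runners in
    # parallel and wait for them all to finish.  The count and the
    # exim path are examples, nothing exim-specific.
    import subprocess

    EXIM = "/usr/sbin/exim"   # example path
    RUNNERS = 5               # example count

    procs = [subprocess.Popen([EXIM, "-q"]) for _ in range(RUNNERS)]
    for p in procs:
        p.wait()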
I use split_spool, so I have thought I could take the contents of each
spool sub-directory and run a queue runner on each one, but then the load
goes through the roof, and the queue runners abandon the run at the
configured maximum load.
What would be good is if, instead of abandoning the run, exim would
pause a moment (perhaps for a configurable time), recheck the load, and loop
some (perhaps configurable) number of times waiting for the load
to come down.
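(Until something like that exists inside exim, it can only be approximated
from outside with a wrapper; a rough sketch, where the load threshold,
pause, and retry count are made-up example values:)

    #!/usr/bin/env python3
    # Rough sketch of the proposed behaviour, done outside exim:
    # before starting a queue run, wait (up to a limit) for the
    # 1-minute load average to drop below a threshold instead of
    # giving up straight away.  All three tunables are examples.
    import os, subprocess, sys, time

    MAX_LOAD   = 8.0   # example: don't start a run above this load
    PAUSE_SECS = 30    # example: seconds to wait between checks
    MAX_TRIES  = 20    # example: give up after this many checks

    for attempt in range(MAX_TRIES):
        if os.getloadavg()[0] < MAX_LOAD:
            sys.exit(subprocess.call(["/usr/sbin/exim", "-q"]))
        time.sleep(PAUSE_SECS)

    print("load never came down, abandoning the run")
    sys.exit(1)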
It would also be interesting if (when split_spool was configured) one
could tell a queue runner to pick one of the sub-directories and process only
that. It would still lock the individual message spool files as it went through
them, but perhaps it could also leave a semaphore global to that spool
sub-directory, which another queue runner doing the same thing could see
and so skip that directory and go on to another.
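(To illustrate the kind of "semaphore" I have in mind, here is a rough
sketch of the claiming logic only; run_one_subdir() is the hypothetical
part, since exim can't currently be told to work on a single split-spool
sub-directory, and the paths are just examples:)

    #!/usr/bin/env python3
    # Rough sketch of the directory-claiming idea.  Each wrapper
    # walks the split-spool sub-directories, tries to drop a claim
    # file for each one (created with O_EXCL, so only one wrapper
    # can win), and skips any sub-directory that another runner
    # has already claimed.
    import errno, os

    SPOOL  = "/var/spool/exim/input"        # example spool path
    CLAIMS = "/var/tmp/exim-subdir-claims"  # example, kept outside the spool

    def run_one_subdir(subdir):
        # Hypothetical: start a queue runner restricted to this
        # split-spool sub-directory.  This is the feature request.
        pass

    os.makedirs(CLAIMS, exist_ok=True)
    for name in sorted(os.listdir(SPOOL)):
        subdir = os.path.join(SPOOL, name)
        if not os.path.isdir(subdir):
            continue
        claim = os.path.join(CLAIMS, name)
        try:
            fd = os.open(claim, os.O_CREAT | os.O_EXCL | os.O_WRONLY)
        except OSError as e:
            if e.errno == errno.EEXIST:
                continue                    # someone else has dibs; move on
            raise
        try:
            os.close(fd)
            run_one_subdir(subdir)
        finally:
            os.remove(claim)                # release the claim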
That way, one could fire up a whole lot of queue runner processes, have
each one get dibs on a separate sub-spool, and let them all process as fast as
they can, backing off to keep from running the load up too high.
I think this would get a large queue of messages delivered as quickly as
the machine is capable of...