Re: [Exim] Performance bottleneck scanning large spools.

Author: Philip Hazel
Date:  
To: Gary Palmer
CC: Kai Henningsen, exim-users
Subject: Re: [Exim] Performance bottleneck scanning large spools.
On Tue, 18 Jan 2000, Gary Palmer wrote:

> > ... but then you end up with <m> queue-runners instead of one, where <m>
> > depends on the length of your queue and isn't controllable. It breaks
> > the current "one queue-runner delivers one message at a time" semantics.
>
> I'm reading an implied ``to a given remote destination'' here?


No. One queue runner runs one delivery process for one message and waits
for it to finish before starting a new process for the next message.[*]
In other words, it "walks the queue", message by message. All the code
for routing, retrying, etc. is in the delivery process, not the queue
runner. The delivery process may deliver to any number of remote or
local destinations (maybe using subprocesses). Or it may defer some
addresses, if retry times have not been reached. Exim's queue is not
organized by domain or remote host. It is just a pile of messages
awaiting delivery. Each may have multiple recipients.

> Anyone ever toyed with threading exim? I know half of you just
> spilled your coffee on your keyboard,


I just fell about laughing. It would be better to start again from
scratch. The code assumes that one delivery process handles one
delivery, and it uses global and static variables.

I didn't really know about threads when I started writing it, and in
any case I wanted to use as few Unix features as I could, to keep it
portable. In practice, I rapidly had to learn about many more Unix
features that I hadn't heard of. :-)

-------------------
[*] If you start a queue runner with -R or -Q, then it skips over
messages that don't match the criteria, but once it does select a
message, it does a full delivery.

-- 
Philip Hazel            University of Cambridge Computing Service,
ph10@???      Cambridge, England. Phone: +44 1223 334714.