That's easy enough to achieve, though.
#!/usr/bin/perl
# keep $descount "exim -q" queue runners alive; re-check every 30 seconds
$descount = shift;
while (1) {
    $count = `ps auxw | grep -v grep | grep -c -- "exim -q"`;
    chomp($count);
    if ($count < $descount) {
        for ($i = 0; $i < $descount - $count; $i++) {
            system("/usr/exim/bin/exim -q &");
        }
    }
    sleep 30;
}
A dozen or so lines of perl. And if your queue is empty, the runners start
up, find no work, and exit. No loss.
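
If you want to try it: save it somewhere handy (the name and path here are
just examples), make it executable, and give it the number of runners you
want kept alive as the argument:

  chmod +x /usr/local/sbin/keep-runners
  nohup /usr/local/sbin/keep-runners 100 >/dev/null 2>&1 &

It re-checks every 30 seconds, so a finished or crashed runner gets
replaced within half a minute.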
George
michael@??? wrote:
>
> > > I'd rather see exim fork off, say, 100 processes and process the queue
> > > with all of them.
> >
> > There is nothing to stop you running a script that obeys "exim -q" 100
> > times if you want Exim to do this. Or set queue_run_max=100 and have the
> > daemon start queue runners every 30 seconds or whatever.
>
> Using a script, you have to count queue runners to avoid starting too many.
> Using exim, it is easy to control the number of queue runners, but in
> your example it will take 3000 seconds until all 100 queue runners are
> active, assuming that they run that long. Up to that point, mail that
> could be delivered if there were enough queue runners will sit
> unprocessed in the queue.
>
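(For reference, the daemon setup being discussed is something like this;
the binary path assumes a default exim layout:

  # in the main section of the exim configure file:
  queue_run_max = 100

  # start the daemon so it launches a new queue runner every 30 seconds:
  /usr/exim/bin/exim -bd -q30s

The daemon starts one runner per interval, which is where the 100 x 30
seconds = 3000 seconds ramp-up above comes from.)
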
> It would be great if I could start more than one queue runner at once from
> exim, but I am not sure how many I would want to start, or at what point.
> Perhaps keep starting queue runners with a small delay between them, until
> the first one returns with an exit code indicating that it found no work?
>
> I have to admit that I like the way qmail solves this. You simply
> specify the remote concurrency, that is, the number of simultaneous
> remote deliveries. Experience tells you pretty quickly how much a machine
> can handle without getting overloaded; you set the concurrency a little
> below that, and that's it.
>
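(qmail reads that limit from a control file; from memory it is just

  echo 100 > /var/qmail/control/concurrencyremote

and qmail-send, which reads its control files at start-up, will then run
up to 100 remote deliveries at once.)
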
> Michael
>