On Thu, 2 May 2002, Dave C. wrote:
> If I run several queue runner processes, it goes slightly faster, but
> the incremental gain seems to be lost due to them stumbling over each
> other ( eg, "another process is handling this message")
As was said, this isn't a great expense. You can, if you really want,
restrict each runner to a subset of the queue, but I don't think it's
worth it.
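
For illustration, here is one way that restriction could be done: Exim's -R
option makes a queue runner attempt delivery only for messages that have an
undelivered recipient address containing the given string. The domain
strings below are purely hypothetical examples, not a recommendation:

```shell
# Sketch only: two queue runners, each limited to messages whose
# undelivered recipients match a different (hypothetical) domain.
# -R implies a single queue run restricted to matching messages.
exim -R example.com &
exim -R example.org &
```

Note the subsets are only as disjoint as your recipient domains are, so
messages with mixed recipients can still be touched by both runners.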
> I use split_spool, so have thought I could use the content of each spool
> sub-directory, and run a queue runner on each one, but then the load
> runs through the roof, and the queue runners abandon the run at the
> configured max load..
With split spool, the queue runners should tackle each subdirectory one
by one.
(Sorry about the typos, I'm on a rather slow satellite link.)
> It would also be interesting if (when split_spool was configured), one
> could tell a queue runner to pick one of the subdirs and process only
> that. It would still lock the individual message spools as it went thru
> them, but perhaps it could also leave a semaphore global to the split
> sub spool, that another queue runner doing the same thing could see, and
> skip that directory and go to another..
Scope for getting stuck, deadlock etc. I don't think you'd gain much.
--
Philip Hazel University of Cambridge Computing Service,
ph10@??? Cambridge, England. Phone: +44 1223 334714.