From: Kai Henningsen
Date:
To: exim-users
Subject: Re: [Exim] Performance bottleneck scanning large spools.
vadik@??? (Vadim Vygonets) wrote on 04.01.00 in <20000104154255.F4433@???>:
> Quoth Philip Hazel on Tue, Jan 04, 2000:
> > On Tue, 4 Jan 2000, Vadim Vygonets wrote:
> >
> > > > Fine, a queue runner creates the shared segment, but who updates the
> > > > shared segment as messages arrive and depart, so that it reflects
> > > > reality when the next queue runner process finds it already exists?
> > >
> > > No-one, but the queue runner will update it when it wakes up
> > > next.
> >
> > ... er, but in order to do that it has to scan the spool directories to
> > find the files, and I thought that was what this whole scheme was trying
> > to avoid?
>
> Alright, then... It's possible to have a Master process and
> update the Master's list without having shared memory, but using
> UNIX domain sockets. But still, I'm not sure it's a nice idea.
Exim already has a db directory for its hints databases.
You could keep a copy of the spool *directory* listing in a database there -
Exim already knows how to maintain DBM files like that and keep them up to
date.
And you can have it do a directory rescan once in a while to make sure no
message got left out (put the last-rescan time into the database and make
the interval configurable - maybe default to once a day). And of course
regenerate the db if it's missing.
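To make the rescan-and-regenerate part concrete, here is a minimal sketch in
C using GDBM (one of the DBM libraries Exim can be built against). The paths,
the "spool_index" database name, the stored payload and the check for "-H"
header files are my assumptions for illustration, not anything Exim actually
does today:

/* Sketch only: a spool-index "hints" file kept next to Exim's other db
 * files.  Paths, the spool_index name and the stored payload are
 * assumptions for illustration. */
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <time.h>
#include <dirent.h>
#include <gdbm.h>

#define SPOOL_DIR   "/var/spool/exim/input"          /* assumed spool location */
#define INDEX_DB    "/var/spool/exim/db/spool_index" /* hypothetical hints file */
#define RESCAN_SECS (24 * 60 * 60)                   /* default: rescan once a day */

static const char rescan_key[] = "last-rescan";

/* Walk the spool directory and (re)insert every message id found on disk,
 * so nothing stays missing from the index; then stamp the rescan time. */
static void full_rescan(GDBM_FILE db)
{
    DIR *dir = opendir(SPOOL_DIR);
    struct dirent *de;
    time_t now;
    if (dir == NULL) return;

    while ((de = readdir(dir)) != NULL)
    {
        size_t len = strlen(de->d_name);
        if (len > 2 && strcmp(de->d_name + len - 2, "-H") == 0)  /* header files */
        {
            datum key = { de->d_name, (int)(len - 2) };          /* message id */
            now = time(NULL);
            datum val = { (char *)&now, sizeof(now) };           /* minimal payload */
            gdbm_store(db, key, val, GDBM_REPLACE);
        }
    }
    closedir(dir);

    now = time(NULL);
    {
        datum key = { (char *)rescan_key, sizeof(rescan_key) };
        datum val = { (char *)&now, sizeof(now) };
        gdbm_store(db, key, val, GDBM_REPLACE);
    }
}

int main(void)
{
    /* GDBM_WRCREAT regenerates the database automatically if it is missing. */
    GDBM_FILE db = gdbm_open(INDEX_DB, 0, GDBM_WRCREAT, 0640, NULL);
    datum key = { (char *)rescan_key, sizeof(rescan_key) };
    datum val;
    time_t last = 0;

    if (db == NULL)
    {
        fprintf(stderr, "cannot open %s\n", INDEX_DB);
        return 1;
    }

    val = gdbm_fetch(db, key);
    if (val.dptr != NULL)
    {
        if (val.dsize == sizeof(last)) memcpy(&last, val.dptr, sizeof(last));
        free(val.dptr);
    }

    /* Fall back to a full scan if the stamp is missing or older than the
     * configurable interval. */
    if (time(NULL) - last > RESCAN_SECS)
        full_rescan(db);

    gdbm_close(db);
    return 0;
}

In real use the same database would also be updated incrementally when a
message arrives or is removed; the sketch only shows the safety-net rescan
and the regenerate-if-missing behaviour.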
No need to learn about shared memory. It's slower than a shared segment, but
should still be significantly faster than rescanning the spool directories
every time. And you can cache whatever info the queue runner needs to decide
which message to tackle next.
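As for what to cache: a small fixed-size record per message id would probably
be enough for the queue runner to order its work without opening the -H files.
The fields below are just my guess at what that info might be:

/* Hypothetical per-message record for the index; the field choice is an
 * assumption about what the queue runner would want at hand. */
#include <sys/types.h>
#include <time.h>

typedef struct spool_index_entry {
    time_t arrival_time;    /* when the message was received */
    time_t next_try;        /* earliest retry time among its addresses */
    off_t  data_size;       /* size of the -D file, for ordering/limits */
    int    frozen;          /* non-zero if the message is frozen */
    int    deliveries_left; /* recipients still undelivered */
} spool_index_entry;

The queue runner could then sort or filter on next_try and frozen straight
from the database instead of reading each spool header.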