Re: [Exim] Performance bottleneck scanning large spools.

Author: Kai Henningsen
Date:  
To: exim-users
Subject: Re: [Exim] Performance bottleneck scanning large spools.
ph10@??? (Philip Hazel) wrote on 11.01.00 in <Pine.SOL.3.96.1000111092033.28007H-100000@???>:

> On 10 Jan 2000, Kai Henningsen wrote:


> Sorry, I meant by means external to Exim such as disc systems that cache
> more things in memory (and tweaks to parameters as suggested by another
> poster).


Well, I suggested at least using the noatime option. Otherwise, Linux (we
were talking about Linux, yes?) does fairly well in 2.2 because of the
dentry (directory entry cache) layer. And whatever memory is free can
get used for cache. Just add lots.
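(For the record, noatime can be switched on without a reboot; the device and mount point below are made-up examples, not anything from the original discussion:)

```
# Remount the filesystem holding the Exim spool with noatime, so each
# queue scan doesn't trigger an inode write per file read:
#   mount -o remount,noatime /var/spool
#
# To make it permanent, add noatime to the options field in /etc/fstab:
#   /dev/sda3  /var/spool  ext2  defaults,noatime  1 2
```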

> > Here's another idea. Would it be possible/useful to create a queue runner
> > mode in which the queue runner does _not_ read the whole spool, but just a
> > reasonable part of it?
>
> That is an interesting idea. I'm not sure how one could implement it for
> a non-split spool, though.


Personally, I'd say whoever needs this also needs a split spool. Without
large queues, neither option is really needed; and with large queues,
you really want a split spool.
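(For anyone who wants to try it: the split spool is a single boolean in the main part of the Exim configuration.)

```
# Main section of the Exim configure file:
split_spool_directory = true
```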

> > Oh, and I suspect for really large queues, you'll want to split into more
> > subdirectories.
>
> I'm afraid my reaction to that is: Aarrgghh!! :-)


Well, the simplest possible variant is to split on n characters instead of
on 1 character. I've used that in other applications, and it seems to work
well. And if you outgrow even that, I suspect nothing based on a normal
file system will work.

Though I note that squid does use a two-level split, with 16 directories
at the first level and 256 at the second.
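A squid-style two-level split can be sketched in a few lines of shell. This is a hypothetical illustration only: the message id is made up, and hashing with POSIX cksum is my choice here, not what Exim or squid actually do (Exim keys its split spool off a character of the message id).

```shell
# Hypothetical sketch: hash a message id into a two-level directory tree,
# 16 directories at the first level and 256 at the second.
msgid="10HmaX-0005vi-00"                        # made-up example message id
sum=$(printf '%s' "$msgid" | cksum | awk '{print $1}')  # POSIX CRC as hash
d1=$(( sum % 16 ))                              # first level: 0..15
d2=$(( (sum / 16) % 256 ))                      # second level: 0..255
path=$(printf 'input/%x/%02x/%s' "$d1" "$d2" "$msgid")
echo "$path"
```

Any stable hash would do; the point is only that every directory stays small enough that scanning it is cheap.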

Regards, Kai