ph10@??? said:
} It would be interesting if somebody did some performance tests. Small
} lsearch files will fit in a single disc block and one feels that a
} linear search of up to say 50 or maybe even 100 entries should be
} quite fast. But I don't actually *know*.
It also depends on the loading of the system and how much memory you have.
If the machine is not being pushed into swap and moves a reasonable
amount of mail, then much of the time the lookup files will be in the
buffer cache and no disk access is needed at all.
I've done a few comparisons of cdb against Berkeley db, and found that
with the tweaks in exim, cdb is rather faster and lighter on CPU (straight
cdb is slower than db because the seeks appear to blow your memory
buffering, but the version in exim mmaps everything for speed, and also
caches the last entry or so, which gives a surprisingly large hit count).
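The "cache the last entry" trick is simple enough to sketch. This is a
hypothetical illustration in Python, not exim's actual code; the class
name and the dict standing in for the cdb file are my own invention:

```python
# Sketch of a one-entry lookup cache: repeated lookups of the same key
# (common when routing a batch of mail for one domain) are answered from
# the cache instead of re-searching the file.
class CachedLookup:
    def __init__(self, table):
        self.table = table           # stands in for the cdb file
        self.last = None             # (key, value) of the most recent hit

    def get(self, key):
        if self.last and self.last[0] == key:
            return self.last[1]      # cache hit: no search at all
        value = self.table.get(key)  # stands in for the real cdb search
        if value is not None:
            self.last = (key, value)
        return value

lookup = CachedLookup({"ph10": "hermes.cam.ac.uk"})
print(lookup.get("ph10"))  # first call searches the table
print(lookup.get("ph10"))  # second call is served from the cache
```

Even a one-entry cache pays off when lookups arrive in runs, which is
exactly the pattern a busy mail hub sees.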
I would guess that the break-even point would be around 100 entries... but
that is a guess. For @@ partial type lookups I would guess that the break
point would be lower. My system has a few thousand entries in most cases.
You also get a performance gain with cdb from having all of the cdb files
mmapped read-only and shared between processes, which cuts down on overall
system memory use.
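To make the read-only sharing point concrete, here is a small sketch
(mine, not exim's code) of mapping a lookup file read-only; on Unix the
mapping is MAP_SHARED by default, so every process mapping the same file
shares one set of page-cache pages rather than each holding its own copy:

```python
import mmap
import os
import tempfile

# Create a stand-in lookup file (the contents are made up for the demo).
fd, path = tempfile.mkstemp()
os.write(fd, b"alice: deliver\nbob: deliver\n")
os.close(fd)

with open(path, "rb") as f:
    # Read-only mapping; N processes doing this pay for one copy of the
    # data in physical memory, not N copies.
    m = mmap.mmap(f.fileno(), 0, prot=mmap.PROT_READ)
    found = m.find(b"bob:") != -1  # stands in for a real cdb hash probe
    m.close()
os.unlink(path)
print(found)
```

A real cdb lookup would follow the file's hash tables rather than scan,
but the memory behaviour of the mapping is the same either way.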
Anyone want to do real tests??
BTW my tests were all done on Linux using a 2.0.x kernel, on both SPARC
and Intel architectures. Other systems will have different, although
broadly comparable, characteristics.
Nigel.
--
[ Nigel.Metheringham@??? - Systems Software Engineer ]
[ Tel : +44 113 207 6112 Fax : +44 113 234 6065 ]
[ Real life is but a pale imitation of a Dilbert strip ]
--
*** Exim information can be found at
http://www.exim.org/ ***