Re: Nightmare with exim

Author: Philip Hazel
Date:  
To: Christoph Lameter
Cc: dirk, Exim List
Subject: Re: Nightmare with exim
On Wed, 14 May 1997, Christoph Lameter wrote:

> We installed exim 1.62 on a Linux system (Debian 1.3) and tried to use it
> to send out e-mail regarding a TV series "Mad about you". We needed to
> send out 100,000 mails and since I have had around 80,000 on some mail
> exploder I thought it would not be such an issue.


I do hope this wasn't unsolicited mail. The plague is getting worse. We
have added 30 addresses to our block list this week alone.

> This in itself brought a load of 3 on the system. The message queue grew
> to 25,000 entries (in part because of some permission issues with the mail
> directory of the user who got the error messages back).


I believe Linux is one of the systems that suffer performance problems
when there are very large numbers of files in the same directory. I am
planning to upgrade Exim so that it can be configured to split the mail
spool directory into 62 subdirectories. This is not actually going to be
very much work; I just have to find the time to do it.
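
To illustrate the idea, here is a minimal sketch (not code that exists in
Exim today; the spool layout, the key character, and the file naming below
are assumptions purely for illustration): each message's files would land
in one of 62 subdirectories named after a base-62 character (0-9, A-Z, a-z)
taken from its message id, so no single directory ends up holding tens of
thousands of entries.

    /* Sketch only: choose a spool subdirectory from a message id. */
    #include <stdio.h>
    #include <string.h>

    static void spool_path(const char *msg_id, char *buf, size_t len)
    {
        /* Key on the last character of the id here; which character a real
           implementation would use is an assumption for this example. */
        char key = msg_id[strlen(msg_id) - 1];
        snprintf(buf, len, "spool/input/%c/%s-D", key, msg_id);
    }

    int main(void)
    {
        char buf[256];
        spool_path("0wRiX3-0003hG-00", buf, sizeof(buf));
        printf("%s\n", buf);   /* spool/input/0/0wRiX3-0003hG-00-D */
        return 0;
    }

With the files spread out like this, each directory stays small and lookups
remain cheap even with a very large queue.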

> Delivery for off site messages was also very slow. We could never get exim
> to deliver more than one message per second.


3,600 per hour. Hmm. John Henders reported yesterday on a system that
was doing about double that, but he was using a dual processor sparc 20
with 512 meg of ram. Of course, the speed of the network link is also
relevant.

> Finally it was decided that
> exim is unsuitable for such a purpose. Qmail is lurking on the horizon...


I do not believe that any one mailer can suit everybody. There are
several to choose from. I have no problem with people deciding that some
other product suits their purpose better than Exim. Heck, I'm not trying
to sell it to make money... :-)

> Is there any way to get exim to do faster deliveries?


Bigger box? Faster network connection?

> The queue management of exim is a catastrophe for larger projects. It is
> slow, unreliable, and unmanageable.


I must suffer from an imagination deficiency, because I never considered
situations where there were 25,000 messages on a spool. Here we rarely
have more than a few hundred at the worst of times, so I was thinking a
few thousand would be the extreme. I think the slowness may in part be
due to the file system problem mentioned above.

> We also had segfaults due to a corrupt Berkeley DB database. After
> removing the database, things went almost back to normal again.


The corruption of DB databases has never been satisfactorily explained.
(The segfaults happen inside the DB functions.) Another thing I hope to
get to fairly soon is to investigate the new release of Berkeley DB, as
well as gdbm, and also a general look at the database area. But as
always, there is too much to do and not enough time.

Philip

-- 
Philip Hazel                   University Computing Service,
ph10@???             New Museums Site, Cambridge CB2 3QG,
P.Hazel@???          England.  Phone: +44 1223 334714