Re: exim's retry database acting up

Author: Christoph Lameter
Date:  
To: Piete Brooks
Cc: Philip Hazel, exim-users
Subject: Re: exim's retry database acting up
On Mon, 6 Jan 1997, Piete Brooks wrote:

Piete.Brooks >ZAP the retry database, and do a queue run ...

That does not sound too good for exim....

Piete.Brooks >> What exim needs to do is to deliver ALL messages when the first
Piete.Brooks >> .forward file from those homedirectories was successfully read.
Piete.Brooks >
Piete.Brooks >*YOU* know that, but how should exim ?
Piete.Brooks >exim doesn't keep track of *why* it failed, but in particular, how does it
Piete.Brooks >know that all .forward files are on a single server ?
Piete.Brooks >(we have multiple home dir servers ....)

I don't know. But some scheme, like tying it to the userforward
director, needs to be there.

Perhaps those messages could be bundled by specifying retry rules for
*paths* in the retry configuration section?
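Something along these lines, borrowing the parameter syntax of exim's
retry configuration section (F = fixed-interval retries); the path
pattern on the left is purely hypothetical, since the retry section
actually keys on domains and addresses:

  # Hypothetical: key the rule on the NFS path holding the .forward
  # files rather than on a mail domain, so one failure covers them all.
  /nfs/homes1/*    *    F,2h,15m

Then a single timeout against that path would defer every message
whose .forward lives there, and one successful read would release
them all together.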

Piete.Brooks >I run multiple processes "exim -q & exim -q & exim -q & exim -q &"

That also does not sound too good for exim....

Piete.Brooks >> Could exim timeout within 10 seconds on those slow messages, fork them
Piete.Brooks >> into a separate process and continue delivering the rest of the queue
Piete.Brooks >> faster?
Piete.Brooks >
Piete.Brooks >What if you have (say) lost your internet connection and have (say) 10K
Piete.Brooks >hosts to which email is to be sent -- you want 10K processes ?

Of course I want an upper limit on the concurrent processes. Perhaps a
slow queue and a fast queue would be all that is needed?
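Roughly this kind of thing, as a minimal C sketch of a capped worker
pool; MAX_SLOW, deliver_one() and the message IDs are all invented
for illustration and have nothing to do with exim's actual internals:

  #include <stdio.h>
  #include <sys/types.h>
  #include <sys/wait.h>
  #include <unistd.h>

  #define MAX_SLOW 10          /* invented cap on concurrent deliveries */

  /* Stand-in for a real delivery attempt; always runs in a child. */
  static void deliver_one(const char *msg_id)
  {
      printf("delivering %s\n", msg_id);
      _exit(0);
  }

  int main(void)
  {
      const char *slow_queue[] = { "msg1", "msg2", "msg3" };  /* placeholders */
      int n = sizeof slow_queue / sizeof slow_queue[0];
      int live = 0;

      for (int i = 0; i < n; i++) {
          if (live >= MAX_SLOW) {      /* pool full: reap one child first */
              wait(NULL);
              live--;
          }
          pid_t pid = fork();
          if (pid == 0)
              deliver_one(slow_queue[i]);
          else if (pid > 0)
              live++;
          else
              perror("fork");
      }
      while (live-- > 0)               /* drain the remaining children */
          wait(NULL);
      return 0;
  }

The fast queue would run the same loop with a short per-message
timeout, handing anything that exceeds it over to this slow pool.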

--- +++ --- +++ --- +++ --- +++ --- +++ --- +++ --- +++ ---
PGP Public Key = FB 9B 31 21 04 1E 3A 33 C7 62 2F C0 CD 81 CA B5