From: Ron
Date:
To: exim-users
Subject: Re: [Exim] hole in message_size_limit? (was: verify = header_sender ...)
On Wed, Jul 21, 2004 at 10:43:12AM +0100, Philip Hazel wrote:
> On Wed, 21 Jul 2004, Ron wrote:
> > Some carefully placed new timeout limits and something like the acl
> > described above for oversize messages would probably about cover what
> > I see we could reasonably do here. Someone may suggest a better way
> > to implement the same result though.
>
> Timeout, maybe. I don't think the effort of an ACL is worth it. (There
> is nothing it could do other than "drop".)
My first guess at a sensible timeout window would be from the point
at which a MAIL command could validly be issued (even if none has
been yet) to the point at which the corresponding DATA completes.
Combined with smtp_accept_max_per_connection, that should put a
firmer upper bound on the time a host can hold a connection open.
It also bounds the bandwidth that can be consumed, but potentially
at a value significantly larger than optimal.
e.g.
eximA expects to receive almost solely text email, plus a lot of
spam (some with large attachments), and pays (in some way) for every
bit transferred. It sets a conservative (from several angles) 1MB
maximum message size.
Using a timeout, it needs to allow 5 minutes or more for a slow
connection to send a legitimate 1MB mail (1MB over 5 minutes is well
under 30 kbit/s). If it is on a fat pipe, someone can drop a lot of
pennies in the bit bucket in that time: on, say, a 100 Mbit/s link,
those same 5 minutes are enough to dump several gigabytes.
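For reference, the closest existing knobs I can see for the eximA
case are the main-section options below. The values are purely
illustrative, and note that smtp_receive_timeout applies per SMTP
command and per block of incoming data, not to the transaction as a
whole -- which is exactly why it doesn't give the bound described
above.

    # Illustrative values only.

    # Advertised in the SIZE parameter of the EHLO response and
    # enforced during message reception.
    message_size_limit = 1M

    # Cap on the number of messages (MAIL commands) accepted over a
    # single connection.
    smtp_accept_max_per_connection = 10

    # Timeout while waiting for each SMTP command and for each block
    # of incoming data; it is not a bound on the whole transaction.
    smtp_receive_timeout = 5m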
It may also be worth noting that a client which exceeds a timeout
may genuinely just have stalled, whereas a client that ignores an
advertised size limit has to do so entirely of its own accord -- so
the two should probably be judged by different criteria when local
policy decides on a response.
The idea of having an ACL for message overflow may just be me
picking up the first hammer I know. What I think I really want
there is simply a hook into 'admin space' that lets me do things
like the following (a rough sketch follows the list):
  - log the occurrence
  - examine the contents that were received
  - send a warning email in the case of a valid or trusted sender,
    since, as you noted, we are in violation of the RFC at this
    stage and can't expect anything but a human to do something
    sensible in response (and even then...)
  - block the site from further sends, subject to local policy
  - ...
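Just to make that wishlist concrete, here is roughly what I imagine
it would look like if it were expressed in today's ACL terms. This
is only a sketch: it hangs off acl_smtp_data, so it only ever sees a
client that completes DATA despite the advertised limit -- the
mid-transfer abort we are actually talking about never reaches it.
The 1M threshold and the ACL name are illustrative.

    acl_smtp_data = acl_check_data

    begin acl

    acl_check_data:
      # Log the occurrence, and freeze the message on the queue so
      # it is not delivered and its contents can be examined (and a
      # warning mailed) by an admin or an external process.
      warn  condition   = ${if >{$message_size}{1M} {yes}{no}}
            log_message = oversize message ($message_size bytes) from \
                          $sender_host_address
            control     = freeze

      accept

The "block the site from further sends" item is the part that
clearly can't be expressed this way from inside Exim.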
Basically, we just need to be able to inform an external process
that this has occurred, in a timely way and preferably more cleanly
than making that process scan a system log in real time, because
unless a malhost can be detected _and_ blocked, there is little
point in just limiting its MTU.
I do agree there is not much else we can do for the message from
the client's perspective at that stage (it sees a broken
connection), but the server still has a summary of the message
intact at that point and could reasonably route that fragment (or
some derivative of it) to any of: the original recipient,
postmaster, a mail filter process that dynamically updates a
blackhole list, etc. One or more of those entities is presumably
going to have to step in if such a message is to be dealt with
conclusively, without a lot of attempted retries.
Anyway, all of this is projected from the model in my head, and I've
not even been a lurker on this list before now, so my terminology is
surely sloppy and my grip on reality tenuous...
I hope that illuminates the crux, though, even if much of it still
looks a lot like an ACL description to me.