Author: James P. Roberts
Date:
To: Alan J. Flavell, Exim users list
Subject: Re: [Exim] Re: Bagle, unqualified HELO, time delays
----- Original Message -----
From: "Alan J. Flavell" <a.flavell@???>
<snip>
> Finally, I did some tests last night and it seems that this particular
> engine (i.e. Bagle) is always producing a triple of SMTP calls, and
> then gives up. The timestamps of this triple are not very accurately
> spaced, but from the typical 20-40sec intervals between them, my
> explanation of what the engine is doing is that it times out after
> perhaps 20 seconds and immediately starts retrying. After trying
> three times it gives up.
>
> So, in the present incident it seems that even a delay of 30s is
> adequate to shrug it off (although one has to respond to three of them
> before they're done). But - as our delays are shrugging off other
> kinds of malefactor too - I'm leaving the timeout at its present
> setting, which, as I say, is above 60s but well clear of the
> RFC-suggested max of 5 minutes. Remember, this is only applied to a
> limited category of callers who have not been outright rejected but
> have already triggered cause for suspicion. Most bona fide senders
> would remain unaffected by it.
>
> Hope that's useful.
>
Would it make sense to apply a 30-second-ish delay to every incoming
connection, simply to weed out the viruses? Any legitimate MTA should accept
such a delay without incident, and most virus engines would give up.
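For concreteness, here is a minimal sketch of what I have in mind, using
Exim 4's ACL "delay" modifier in the connect-time ACL (the ACL name is my
own invention, and 30s is just the figure under discussion):

    # Main configuration: run this ACL when a connection arrives.
    acl_smtp_connect = acl_check_connect

    begin acl

    acl_check_connect:
      # Hold every caller for 30 seconds before the banner is sent.
      # A legitimate MTA will wait; an impatient virus engine should
      # time out and give up.
      accept delay = 30s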
After a little thought, it occurs to me that this could work even for some
high-volume servers. If every single connection is delayed by the same
amount, it would behave as a pure transport delay. After the initial delay,
the same number of connections would complete per second; they would all
simply complete 30 seconds late. You would have to raise the maximum number
of simultaneous connections allowed, to account for the number of connections
typically initiated in a given 30-second window (see the rough figures
below).
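To put rough numbers on it (these are invented figures, not measurements):
a server averaging 10 new connections per second would accumulate about
10 x 30 = 300 extra connections in flight during the delay window, so
Exim's connection limit would need headroom for them:

    # Hypothetical sizing: 10 conn/s x 30 s delay = ~300 extra
    # connections held open, on top of the normal working set.
    smtp_accept_max = 400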
Once the 30-second pipeline is filled, connections would complete at the same
rate as new connections come in. The actual messages-per-minute capacity of
the machine would be unchanged! The only difference would be the extra number
of connections held open at any given moment.
It would be similar to a fixed delay in a television broadcast. There is no
loss of bandwidth in the broadcast signal. But the broadcaster has to allow
for a "buffer" to hold the signal data for the delay period.
Unless someone spots a flaw in my reasoning, I believe I may try this. If
I do, I will let you know how it turns out. Mine is, however, a low-volume
server, so YMMV.