[ On Sunday, July 6, 2003 at 14:37:19 (-0500), Jerry Jorgenson wrote: ]
> Subject: Re: [Exim] error response rate limiting vs. overloaded systems
>
> But that's not the problem, except for some misconfigured mailers, which
> aren't in the majority, or even that frequent.
Perhaps you've forgotten how this thread started? It is a good thing
that misbehaved mailers (whether accidental or malicious) are in the
minority. However, they are present, and they do cause very serious
problems, as all readers should know by now. The point is to avoid even
their infrequent problems by automatically making them into a tiny
pitter-patter of noise well below your threshold of hearing.
> > Holding a connection open and idle for some time before you send the
> > last line of an error response will save you massive amounts of
> > resources at only a tiny expense. Pay a little, save a lot.
>
> And prevent your normal customers from getting their legitimate mail.
Clearly you're not paying attention to the big picture. Normal e-mail
is not _ever_ adversely impacted by proper use of error response rate
limiting, not even if a poorly scaled and/or tuned server has to hand
out 421 replies on connect for a short while.
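To make this concrete: in Exim 4 terms the simplest form of what I'm
describing is nothing more than a delay on the rejection, something
roughly like the sketch below (untested, from memory -- check the spec
for the exact syntax of the "delay" ACL modifier, and pick whatever
interval suits your site):

    # main section: use this ACL for RCPT commands
    acl_smtp_rcpt = acl_check_rcpt

    begin acl

    acl_check_rcpt:
      # known recipients go through with no delay at all
      accept  verify  = recipient

      # everything else is an error, so sit on the connection for a
      # while before the 5xx goes out -- a looping or malicious client
      # can then only complete one failure per connection per interval
      deny    message = unknown user
              delay   = 30s

That isn't the whole story of course (you'd want the same treatment for
every response that only reports an error), but it shows the shape of
it: legitimate mail never waits, and a client that does nothing but
generate errors spends its time sitting on an idle socket.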
> And how would you differentiate between the black and white cars, when
> some white cars (e.g. large ISPs) act like black cars, but are legitimate,
> and you absolutely must let them through without any delay (but there are
> way too many of them to put on an exception list).
If that's what you think is happening then you've still not grasped the
big picture here. If a large ISP is hammering your server with
transactions that only generate errors then they are running a
misbehaved mailer, and rate-limiting their connections by forcing them
to wait for the error response will actually solve the problem they are
causing you. This really does work, regardless of how many connections
per second your server accepts on average. Rate-limiting transactions
which only generate errors can only ever help reduce your overall
system load -- it CAN NOT _EVER_ hurt, at least not in any otherwise
properly scaled and tuned system. If you don't believe this then please
try it for yourself instead of making baseless arguments.
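To put some purely illustrative numbers on that: suppose a broken
client would otherwise push 50 error-only transactions a second at you,
and it opens at most 10 parallel connections. With a 30-second delay on
each rejection it can complete at most 10/30 of a failing transaction
per second -- call it one every three seconds -- which is roughly a
150-fold reduction in wasted work, paid for with nothing more than ten
mostly-idle sockets.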
I've done this, I've experimented at different scales, and I've had
nothing but success. I went down this road in the first place because I
have a lot of experience doing this same kind of thing in many
different data communications scenarios, and I knew it would work
before I even started -- to me it was plainly obvious.
> If the delay method worked (and it's been tried, but it failed to achieve
> the results you postulate),
It does work. I'm not postulating -- I'm talking about real-world
systems. I've measured the effect across a range of different systems
and I repeat: the bigger the system, the bigger the savings. There is
not even a hint of any swing in the curve of the graph, and no valid
theory that even suggests there might be.
If you've tried error response rate limiting and you think it failed
then the real problem was with your implementation and you cannot
possibly know what you're talking about. Please go back to the books
and try again once you've learned how to do it properly.
--
Greg A. Woods
+1 416 218-0098; <g.a.woods@???>; <woods@???>
Planix, Inc. <woods@???>; VE3TCP; Secrets of the Weird <woods@???>