[exim] Rate limiting: /strict and /leaky

Author: Brian Candler
Date:
To: exim-users
Subject: [exim] Rate limiting: /strict and /leaky
I've been having a think about the /strict and /leaky options to the
ratelimit ACL condition.

Firstly, I'd like to check I understand it correctly. Let's say I have a
site configured with /strict checking on RCPT attempts, and their limit is
set to 10 per hour.
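
For concreteness, the sort of ACL entry I have in mind is roughly this (just
a sketch: I'm assuming the usual acl_check_rcpt ACL, the limit and period are
illustrative, per_rcpt is how I'd count RCPT commands, and the key is left at
its default):

  acl_check_rcpt:

    deny  ratelimit = 10 / 1h / strict / per_rcpt
          message   = Sender rate $sender_rate exceeds $sender_rate_limit

    # ...followed by the usual verification and accept statements.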

If they send a burst of 10,000 mails, then the first 10 will get through and
the remaining 9,990 will be rejected. However, is it true that it will also
take ~999 hours before they are allowed to send any more mail, even if they
make no further attempts in the meantime?

If so, that gives a rather easy denial-of-service attack against a customer
site. If they have a formmail CGI, then you just post to it tens of
thousands of times and you could have them blocked for weeks.

With /leaky, some mail will get through: e.g. if you continuously hit
formmail many times per minute, 10 mails will still get through over the
course of an hour. That's not ideal, because an abuser hitting a formmail
script hard will still get *some* benefit.

I guess what I'd really like is something in between, which results in:
- if a continuous attack is in progress, then no mail at all gets through
- once the attack has stopped, then mail can flow again, without having
been excessively penalised for the number of failed attempts during the
attack

One way might be to 'clip' the rate of sending attempts recorded in the
ratelimit database: e.g. with a message limit L per period P, always update
the value in the database as /strict does, but clip it to (say) 1.1L or 2L.
An attack will keep the value topped up above L, but it can recover quickly
afterwards.
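
In rough pseudo-code (a sketch only: the exponential smoothing is my reading
of how the ratelimit average behaves, and the names and the factor of 2 are
made up):

  import math

  def update_rate(stored_rate, interval, period, limit, clip=2.0):
      # Smooth the rate on every attempt, as /strict does. This assumes an
      # exponentially weighted average with weight exp(-i/p), which is how
      # I understand the ratelimit smoothing to work.
      interval = max(interval, 1e-9)        # guard against a zero interval
      a = math.exp(-interval / period)
      rate = (1.0 - a) * (period / interval) + a * stored_rate
      # The proposed twist: never store more than clip * limit, so an
      # attack keeps the value topped up above L, but recovery afterwards
      # is quick.
      return min(rate, clip * limit)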

However, it would also be nice for exim_dumpdb to show the actual rate
values while an attack is in progress, as /strict does now. So perhaps it
would be possible to store the actual rate, but arrange for a much faster
recovery time when the current rate is over the limit? Could you fiddle the
time constant so that the recovery time from current rate R back down to
limit L takes roughly time P, no matter how much R is above L? The maths of
this is beyond me :-)
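
Thinking about it a little more: if the stored rate decays exponentially with
time constant P between attempts (my assumption, not checked against the
source), then recovery from rate R down to the limit L takes P*ln(R/L), and
the clipping idea above bounds that at about 0.7P however big the attack was.
A quick sketch:

  import math

  def recovery_time(R, L, P):
      # Time for an exponentially decaying rate (time constant P) to fall
      # from R back down to the limit L.
      return P * math.log(R / L)

  P, L = 1.0, 10.0                          # period = 1 hour, limit = 10/hour
  for R in (20.0, 100.0, 10000.0):
      print(f"from {R:>7g}/hour: {recovery_time(R, L, P):.2f} hours")
  # Clipped at 2L, recovery is at most P*ln(2), i.e. about 0.69 hours here.
  print(f"clipped at 2L:  {recovery_time(2 * L, L, P):.2f} hours")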

Anyway, it's just a thought. I have decided to go with /leaky for now to
avoid the denial-of-service attack outlined above, but it would be nice to
tighten this up later.
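
For reference, what I've gone with for now is the same entry as above but
with /leaky in place of /strict:

    deny  ratelimit = 10 / 1h / leaky / per_rcpt
          message   = Sender rate $sender_rate exceeds $sender_rate_limit

(/leaky is the default anyway, I believe, but spelling it out makes the
intent clear.)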

Regards,

Brian.