George Ascione wrote:
>
>
>> Josip Rodin wrote:
>>
>>> On Wed, Nov 29, 2006 at 11:42:10AM -0500, Dean Brooks wrote:
>>>
>>>
>>>> Keep in mind that there *are* sites out there that:
>>>>
>>>> - Don't ever retry at all
>>>> - Retry only once after one hour
>>>> - Retry once only every 24 hours
>>>> - Treat temporary errors as failures
>>>> - etc.
>>>>
>>>>
>>> I recently encountered one ISP which retries in fifteen-minute
>>> intervals, but only three times, before it gives up. Wackos.
>>>
>>>
>>>
>> And therein lies the rub. I think the solution is to apply
>> greylisting selectively rather than to everyone. Suppose that you
>> only applied greylisting to hosts that:
>>
>> - Have bad rDNS
>> - Are in home/dynamic IP ranges
>> - Fail any header test
>> - Have a slightly weird HELO
>> - Use MAIL FROM:<>
>> - Are listed in blacklists like Spamcop that are not good enough to
>>   block on outright
>>
>> With that you will probably block almost as much spam without
>> the delays.
>>
>
>
> In its simplest form this is a fantastic idea, thank you. Of course,
> you would only want to greylist when you have a reason to do so. I am
> going to try right away to figure out how to set up the ACL to test
> this.
>
>
I'm not yet running greylisting the way most people do. Instead, I
defer all suspicious hosts on my lowest MX but accept them on the
second MX. That seems to work for me, except for qmail servers, which
will retry the lowest MX forever.
I'm thinking I should be using greylisting instead, but I haven't
decided how to do it.
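
For what it's worth, the selective approach quoted above could be
expressed as an Exim ACL. This is only a sketch: it assumes the Debian
greylistd daemon on its default socket path, and the "suspicious host"
tests (no reverse DNS, null sender, HELO starting with a bare IP) are
illustrative stand-ins for the fuller list above:

```
# Sketch for acl_check_rcpt -- not a drop-in config.
# Assumes the Debian greylistd daemon; the socket path and the
# "suspicious" tests below are illustrative only.

# Mark the host suspicious if it has no reverse DNS, uses a null
# sender, or HELOs with something starting in a bare IP address.
warn
  set acl_m0 = ${if or{ \
                 {!def:sender_host_name} \
                 {eq{$sender_address}{}} \
                 {match{$sender_helo_name}{\N^\[?\d+\.\d+\N}} \
               }{yes}{no}}

# Greylist only those hosts; everyone else is never delayed.
defer
  message   = Greylisted, please retry later
  condition = ${if eq{$acl_m0}{yes}}
  condition = ${readsocket{/var/run/greylistd/socket}\
               {--grey $sender_host_address $sender_address $local_part@$domain}\
               {5s}{}{false}}
```

Hosts that pass the tests are accepted without any delay, so a
well-behaved but impatient retrier (like that fifteen-minute ISP) only
gets deferred if it also looks suspicious.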