Author: W B Hacker
Date:
To: exim users
Subject: Re: [exim] Duplicate emails
Dave Lugo wrote:
> On Wed, 20 May 2009, W B Hacker wrote:
>> Unless 'partially obfuscated' changed more than I think it did, roughly
>> one-second processing is within reach on that one, as it is a known spam
>> engine:
>
> That's an interesting comment, as I have several customers for
> which I've had to whitelist constantcontact.
Until LBL'ed they were getting past just about everything here - SA included -
so I'm more curious as to what required a whitelist than whether either of us
agrees on the 'value' of their content.
>
> Can we stick to just the technical issue the OP is having?
>
>> Which raises the question - how much of the problem the OP reports is
>> driven by a server spending more time than it needs to analyzing spam
>> that could have been rejected early on smell alone?
>>
constantcontact quite aside, that observation is one I'll stand by.
I suspect the OP's belief that more powerful hardware is needed stems from an
installation that takes too high a percentage of arrivals to a 'deeper' level
than need be - specifically, leaving nearly all filtering to the DATA phase,
where, for example, calls to SA for *every* arrival 'offered' can swamp even
very powerful hardware.
Whether one ends up rejecting them OR passing them (optionally sequestered),
they have still had a relatively 'heavy' SA inspection.
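To make the 'early vs. DATA' point concrete, here is a rough sketch of the
cheap connect/RCPT-time gating in an Exim ACL. The file path, DNSBL choice,
and messages are illustrative assumptions, not the OP's config:

```
# acl_smtp_rcpt sketch - compiled-in, cheap checks run long before DATA
acl_check_rcpt:

  # local blacklist ('LBL') - a simple file lookup, effectively free
  deny    message  = host is locally blacklisted
          hosts    = /etc/exim/host.deny

  # public DNSBL - one (cacheable) DNS query vs. a full SA run
  deny    message  = listed in $dnslist_domain
          dnslists = zen.spamhaus.org

  # no reverse DNS - rejected 'on smell' before any message body arrives
  deny    message  = no rDNS for $sender_host_address
         !verify   = reverse_host_lookup

  accept
```

Everything rejected here never reaches the DATA phase, so it never costs an
SA inspection at all.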
IF, that is - because we don't actually know whether the OP is even using SA
or another scanner.
But IF SO, heavy SA runs in Perl may be leaving too few resources to handle
those large faxes and the like in as timely a manner as needed to keep the
sender from giving up mid-transaction and retrying - hence the duplicates.
>
> Spam is determined by the recipient or domain owner. You're
> neither in this case ;)
>
It doesn't matter so much *which* of it one classes as unwanted.
It matters *greatly* how 'cheaply' one *makes* the determination, and with
which toolset (compiled Exim LBL/RBL/rDNS checks - some cached - vs
interpreted Perl).
If the need, for example, is for a relatively forgiving acceptance policy,
then it makes sense to run fewer, or at least less resource-intensive, tests
on certain characteristics. Bayesian scoring, for example, may not pay its
own way.
Conversely, with seriously draconian rejection of servers based on their 'lack
of credentials' (guilty here), there is plenty of time to run an SA scan, as
it is invoked for only a very small percentage of total 'attempted' arrivals.
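Once connection-time rejection has thinned the stream that far, the DATA-phase
scan touches only the remainder. A minimal acl_smtp_data sketch (a locally
listening spamd and a 10.0-point threshold are assumptions):

```
# acl_smtp_data sketch - SA consulted only for the few survivors
acl_check_data:

  # trusted relay clients skip the scan entirely
  accept  hosts = +relay_from_hosts

  # run spamd unconditionally for the rest ('nobody:true' records the
  # score without deciding); $spam_score_int is the score times ten
  warn    spam      = nobody:true

  deny    message   = classified as spam (score $spam_score)
          condition = ${if >{$spam_score_int}{100}}

  accept
```

The design point is the same either way: the expensive interpreted scan sits
behind the cheap compiled tests, not in front of them.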
Likewise your LWL and my LBL: totally opposite goals w/r/t pass/fail, but the
same goal w/r/t reducing the need for more complex inspection, with its
perhaps less predictable outcome.
It is the improved-efficiency methodology the OP can benefit from - not the
specific preference as to whether he passes or fails <whomever>.