John Traweek wrote:
> Thanks for all of the replies. I have a lot to think about now :)
*SNIP*
> However, in regards to updating the Valid flag when an address is
> deemed invalid or has been on a 4XX for a long period of time I am a
> little foggy...
>
> I assume that I will receive two types of notifications for bad
> forward to's.
>
> 1. During the SMTP transaction with the destination server.
> 2. Receive an NDR from the destination server at some point post
>    transaction.
>
> How would I go about parsing these out? I assume I could simply
> parse the logs using a scheduled job or can Exim do some of this for
> me? I guess I would like this data to go into a table within MYSQL,
> so I can write a SQL job to simply go out and do some calculations
> based on frequency etc to trip the valid flag to N. I am quite good
> at SQL, but Exim and Linux are new to me, so any input would be
> appreciated. Thanks.
>
ACK. You can build a list by parsing the logs with an external tool
(your scheduled job, for instance), then lsearch it from within Exim.
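For instance - file path and message text are just placeholders - if the
job dumps the local addresses whose forward-to has gone bad into a flat
file, one address per line, the RCPT ACL can refuse them with:

  deny    message    = forwarding for this address has been disabled
          recipients = lsearch;/etc/exim/invalid_forwards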
Or you can build the list from a custom router's errors_to and such - or
a manualroute router, or an invoked script - instead of log parsing.
Some external massaging is still needed either way.
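As a sketch of the router route (the table, column names, and collector
address below are all invented - adjust to your schema): a redirect
router that only forwards while your Valid flag is 'Y', and that points
its bounces at an address you control, so the NDRs come back somewhere
you can harvest them:

  fwd_users:
    driver = redirect
    domains = +local_domains
    data = ${lookup mysql{SELECT fwd_to FROM forwards \
             WHERE local_part = '${quote_mysql:$local_part}' \
             AND valid = 'Y'}{$value}fail}
    errors_to = fwd-bounces@example.com

Alias or pipe fwd-bounces to a script and it can do the bookkeeping (or
an INSERT/UPDATE like the ones further down) as the NDRs arrive.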
Where 'list' can be a flat file or a DB table, or translated into one
from the other for Exim's use. A CDB exported from *SQL, for example, is
robust and far cheaper to look up than a flood of direct SQL SELECTs on
every delivery.
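Roughly like this - assuming djb's cdbmake (or the tinycdb equivalent)
is on the box, and with made-up file, table, and column names - a cron
job re-spins the CDB from MySQL:

  mysql -N -B -e "SELECT address FROM forwards WHERE valid='N'" maildb \
    | awk '{ printf "+%d,1:%s->1\n", length($0), $0 } END { print "" }' \
    | cdbmake /etc/exim/invalid.cdb /etc/exim/invalid.cdb.tmp

and Exim then does a cheap single-key lookup against it, e.g. in an ACL:

  deny    message   = forwarding for this address has been disabled
          condition = ${lookup{$local_part@$domain}cdb{/etc/exim/invalid.cdb}{yes}{no}}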
Not necessarily recommended, but just as one can do a SELECT, one can
also do an SQL 'UPDATE' or 'INSERT' in an acl or router.
(example only - unrelated function to your use, and the VALUES half here
is just illustrative filler):

  set acl_c19 = ${lookup pgsql{INSERT INTO brownlist \
                  (pg_when, pg_why, pg_ip, pg_host, pg_where) VALUES \
                  (now(), 'suspect HELO', '$sender_host_address', \
                  '${quote_pgsql:$sender_host_name}', 'connect ACL')}{y}{n}}
NB: Observe that I make a practice of naming my DB fields with a 'pg_'
prefix. It really helps when debugging acl clauses to have it perfectly
clear what is an Exim variable and what lives in the DB, while still
making it easy to use otherwise-identical names for matching, as in
$local_part and pg_local_part, rather than some field name sourced from
the planet beyond Saturn.
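If you did want Exim itself to trip your Valid flag to 'N' (rather than,
or as well as, the scheduled SQL job you have in mind), the MySQL
flavour of the same trick - table and column names invented, substitute
your own schema - would look something like:

  warn  set acl_m9 = ${lookup mysql{UPDATE forwards SET valid = 'N' \
                      WHERE fwd_to = '${quote_mysql:$local_part@$domain}'}{ok}{failed}}

from whichever ACL or router spots the failure. The frequency counting
is still easier done in SQL, as you planned.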
YMMV,
Bill