On Wed, Nov 16, 2011 at 5:07 AM, Dean Bishop <dbishop@???> wrote:
> Hey guys,
>
> This all makes sense and matches my test results and reading...sadly. So the current iteration is below. It works in all respects except that it duplicates the archived copy of messages sent from a locally hosted account to a locally hosted alias. The copies are identical in every way. I've added a header in the transport archiver, but the router seems to ignore this added header. If I'm not mistaken, this is just the way routers work: they ignore anything but the original message headers. Is this an accurate assessment? Is there any way to add something to a message that the routers can use as a flag in a condition statement? A filter, maybe?
When the message matches the alias router and gets rerun through the
routers, it still has the same mail queue id. So in the archive router,
check whether a flag file named after the queue id exists (something
like /tmp/exim/$queueid): if it doesn't, do the archive and create the
flag file. When the email goes through the routers again after matching
an alias router, the flag file for that queue id already exists, so
archiving will be skipped. Then write a script, run hourly from cron,
that cleans anything older than an hour out of that /tmp/exim/
subdirectory.
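Untested, but the idea could be sketched roughly like this (the router
and transport names are made up, and you'd need to adapt it to your
existing archive setup; on Exim 4 the queue id is $message_exim_id):

```
# Hypothetical sketch -- names are illustrative, not from a real config.
archive:
  driver = accept
  # Only archive on the first pass through the routers:
  # skip if the flag file for this queue id already exists.
  condition = ${if exists{/tmp/exim/$message_exim_id}{no}{yes}}
  # Keep routing so normal delivery still happens.
  unseen
  transport = archive_transport

# archive_transport itself (or a wrapper script it runs) would then
# touch /tmp/exim/$message_exim_id after writing the archive copy.
```

The hourly cleanup could be a one-line cron job, something like
"find /tmp/exim -type f -mmin +60 -delete".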
Feel free to substitute any type of storage you're willing to use
instead of flag files, such as MySQL, memcache, or MongoDB. Memcache
can be accessed using Mike Cardwell's method of raw memcache socket
access (search the list archive via google), so no external perl
modules or programming would be needed. Personally I tend to use
memcache and perl because I have memcache machines here.
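For the raw-socket variant, a (hypothetical, untested) condition could
use Exim's ${readsocket} expansion to speak the plain memcached text
protocol directly; the host/port, key prefix, and one-hour expiry below
are all assumptions:

```
# memcached "add" stores the key only if it is absent, so the
# check-and-set is atomic: the first pass gets STORED (archive),
# every later pass gets NOT_STORED (skip).
condition = ${if match \
  {${readsocket{inet:127.0.0.1:11211}\
    {add arch-$message_exim_id 0 3600 1\r\n1\r\n}{2s}{}{FAIL}}}\
  {STORED}{yes}{no}}
```

The empty eol argument strips newlines from the response, and the FAIL
fallback means a dead memcached just makes the match fail rather than
deferring the message -- check whether that failure mode is what you
want.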
Figure out what is best for you.
Regards... Todd
--
If Americans could eliminate sugary beverages, potatoes, white bread,
pasta, white rice and sugary snacks, we would wipe out almost all the
problems we have with weight and diabetes and other metabolic
diseases. -- Dr. Walter Willett, Harvard School of Public Health