| What would be nicer is to pull these numbers into rrd/MRTG and draw pretty
| graphs for some trend analysis, but I never seem to get time to do that
| bit. Anyone doing that?
I've seen some schemes that involve re-running the whole stats job every 5
mins, which seems awfully wasteful. Better to have a long-running process
which essentially does a "tail -f" on the log and increments counters as
appropriate.
I remember ages ago Nigel suggested the Perl File::Tail module. This
*almost* worked for us but, at least on Solaris, it would occasionally
get confused and rewind back to the start of the file for no apparent
reason - causing a huge spike in the graphs. We never found a solution.
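From memory, the File::Tail version looked roughly like this (the path
and interval are illustrative, not our actual config):

use File::Tail;

# follow the file; maxinterval caps how long read() will sleep
my $tail = File::Tail->new(name => "/var/log/maillog", maxinterval => 30);

while (defined(my $line = $tail->read))
{
    # increment counters based on $line
}

My guess is the rewind was File::Tail's truncation/rotation detection
misfiring, but that's only a guess.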
We now just do the "tail -f" thing ourselves in Perl - something like
this (the log path is just an example):

use strict;
use warnings;

# Open the log and start at the end, like "tail -f"
open(LOG, '<', '/var/log/maillog') or die "can't open log: $!";
seek(LOG, 0, 2);

for (;;)
{
    while ( <LOG> )
    {
        # while input present, process a log line
        # etc
    }
    # write out the stats for mrtg (one way sketched below)
    # wait a bit
    sleep 60;
    # Clear end-of-file condition ( with a seek no-op ):
    seek(LOG, 0, 1);
}
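For the "write out the stats" step: MRTG, when polling an external
command, reads four lines from its stdout - two values, an uptime
string, and a target name. Since this loop is a long-running daemon,
one approach (file name and counter names invented for illustration) is
to dump those four lines to a file that a trivial wrapper script cats
when MRTG polls:

# $in and $out would be counters maintained in the loop above
sub write_mrtg_stats
{
    my ($in, $out) = @_;
    open(my $fh, '>', '/var/tmp/mailstats.mrtg') or die "write: $!";
    print $fh "$in\n";        # first MRTG variable
    print $fh "$out\n";       # second MRTG variable
    print $fh "0\n";          # uptime string (unused here)
    print $fh "mailhub\n";    # target name
    close($fh);
}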
This sort of scheme has other uses. An experimental script based on it
detects local (non-MTA) systems sending large amounts of mail through
our smarthosts. It recently alerted us when one of our windoze boxes
had been turned into a spam-zombie, long before our smarthosts ended up
in SpamCop et al...
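In case it's useful, the counting part of that experimental script
boils down to something like this (the regex and threshold are made up
for illustration - real MTA log formats vary):

my %count;                  # messages seen per client IP this interval

sub note_client
{
    my ($line) = @_;
    # naive example pattern - adjust for your MTA's log format
    if ($line =~ /client=\S+\[(\d+\.\d+\.\d+\.\d+)\]/)
    {
        $count{$1}++;
    }
}

sub check_thresholds
{
    my $limit = 500;        # arbitrary per-interval alert threshold
    for my $ip (keys %count)
    {
        warn "possible spam source: $ip sent $count{$ip} msgs\n"
            if $count{$ip} > $limit;
    }
    %count = ();            # reset for the next interval
}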
--
Chris Edwards, Glasgow University Computing Service