On Mon, Dec 04, 2000 at 03:07:40PM +0200, Warren Baker wrote:
> Do you think it would be feasible to have the ability to let
> Exim log to a MySQL/Postgres/<whatever> db ?
No. Totally impractical, and insane. Why? Well, the volume of logs will kill
your database really well. Secondly, your database would be constantly
updated (which is slow, or so I am told by DB experts), so your machine
would need to be pretty powerful. Thirdly, if you arrange your logs properly
and gzip (or bzip2) them, then a few Perl scripts can do all you want --
and they are not hard to write. If you want speed, then C and a bit of time
is all you need.
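For the record, most ad-hoc searches don't even need Perl -- zgrep will
search plain and gzipped logs alike. A quick sketch (the paths and log
lines below are made up for illustration; Exim's real mainlog location
depends on your build):

```shell
# Sketch: search rotated, gzipped logs without unpacking them by hand.
# /tmp/exim-demo stands in for your real log directory.
mkdir -p /tmp/exim-demo
printf '2000-12-04 15:07:40 14Skjq-0001XY-00 <= alice@example.com\n' > /tmp/exim-demo/mainlog.1
gzip -f /tmp/exim-demo/mainlog.1
printf '2000-12-04 15:09:02 14Skl6-0001Xq-00 => bob@example.org\n' > /tmp/exim-demo/mainlog
# zgrep handles plain and .gz files in one go:
zgrep -h 'example.com' /tmp/exim-demo/mainlog /tmp/exim-demo/mainlog.1.gz
```

Anything fancier (per-sender totals, date ranges) is a ten-line Perl
script on top of the same idea.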
> From a client services and reporting point of view, it would be easier to
> extract logs from a db with a couple of SQL commands which could be
> output to a webpage, using php/perl etc. One wouldn't have to worry about
> log rotation (this would obviously depend on how busy your server is).
> In a multiple servers environment, logs could be written to one central
> db. This function would give support the ability to search on dates
> amongst other things.
If you want logs on one central server, then why not mount an NFS
file store and write the logs there? Or scp the logs to a central log
repository?
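The scp route is a one-liner in cron on each box. A sketch of the nightly
push (the directories here are stand-ins; in production the cp would be
something like "scp ... loghost:/archive/$(hostname)/", with loghost and
the paths being whatever your site uses):

```shell
# Sketch of a nightly push: gzip yesterday's rotated log and copy it
# to a central repository. /tmp/mta1 plays the mail server, /tmp/loghost
# plays the central box; swap the cp for scp in real life.
mkdir -p /tmp/mta1/log /tmp/loghost/archive/mta1
printf 'sample exim mainlog line\n' > /tmp/mta1/log/mainlog.1
DAY=$(date +%Y%m%d)
gzip -c /tmp/mta1/log/mainlog.1 > /tmp/loghost/archive/mta1/mainlog.$DAY.gz
# Check the copy is readable on the "central" side:
zcat /tmp/loghost/archive/mta1/mainlog.$DAY.gz
```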
As for log searches, how many do you *really* do?
> Maybe I am heading in the wrong direction but how does everyone else
> handle centralized logging in a clustered environment without forgetting
> about client services ?
Perl and NFS/scp would be what I would suggest. It's easier to
administer as well.
--
Dr Yann Golanski Senior Developer
Please use PGP: http://www.kierun.org/pgp/key-planet