On Mon, 4 Dec 2000, Yann Golanski wrote:
>
> No. Totally impractical and insane. Why? First, the sheer volume of
> logs will swamp your database. Secondly, the database will be
> constantly updated (which is slow, or so I am told by DB experts), so
> your machine needs to be pretty powerful. Thirdly, if you arrange your
> logs properly and gzip (or bzip2) them, a few Perl scripts can do
> everything you want -- and they are not hard to write. If you want
> speed, then C and time are all you need.
Very true. I was just thinking it would be a lot easier to search through
a database than through log files when you are trying to look up a date
from months ago.
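For what it's worth, the date search the quoted Perl scripts would wrap is
small. A minimal sketch, assuming logs live under /var/log/exim, rotations
are named mainlog.N.gz, and each line starts with a YYYY-MM-DD date (adjust
all three for your setup):

```shell
#!/bin/sh
# Sketch only: pull every line for one day out of the live log and the
# gzipped rotations. LOGDIR, the rotation naming, and the
# date-at-start-of-line format are assumptions about your setup.
LOGDIR=${LOGDIR:-/var/log/exim}
DATE=${1:-2000-09-15}

{ zcat "$LOGDIR"/mainlog.*.gz 2>/dev/null   # rotated, compressed logs
  cat  "$LOGDIR/mainlog"      2>/dev/null   # the live log
} | grep "^$DATE"
```

Wrapped in a Perl script, the same pipeline can just as easily select on a
message ID or sender instead of a date, which covers most ad-hoc queries.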
> If you want logs on one central server, then why not mount an NFS
> file store and write the logs there? Or scp the logs to a central
> repository?
That's another option I was considering.
We will be using an NFS mount anyway.
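If the scp route wins out, a nightly cron job is about all it takes. A
sketch, assuming a rotated mainlog.1 and a hypothetical "loghost" central
repository (both names are placeholders):

```shell
#!/bin/sh
# Sketch: compress yesterday's rotated log and copy it to a central
# repository. "loghost" and all paths are placeholders, not real names.
LOGDIR=${LOGDIR:-/var/log/exim}
DEST="loghost:/var/log/archive/$(hostname)"

# Compress first, copy second; scp only runs if the gzip succeeded,
# and the remote copy gets a datestamped name.
gzip -9 "$LOGDIR/mainlog.1" &&
scp "$LOGDIR/mainlog.1.gz" "$DEST/mainlog.$(date +%Y%m%d).gz"
```

Run from cron after the nightly rotation, this keeps the central copy at
most a day behind, which is usually fine for support lookups.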
> As for log searches, how many do you *really* do?
Our support dept. relies heavily on Exim's logs when it comes to
supporting ETRN clients.
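That kind of support query is again just a pipeline over the same files. A
sketch, assuming the string "ETRN" and the client's hostname both appear on
the relevant mainlog lines (both are assumptions about your log contents):

```shell
#!/bin/sh
# Sketch: every ETRN-related line for one client, across the live log
# and the gzipped rotations. The "ETRN" pattern and the hostname are
# assumptions about what actually shows up in your mainlog.
LOGDIR=${LOGDIR:-/var/log/exim}
CLIENT=${1:-client.example.com}

{ zcat "$LOGDIR"/mainlog.*.gz 2>/dev/null
  cat  "$LOGDIR/mainlog"      2>/dev/null
} | grep ETRN | grep "$CLIENT"
```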
> Perl and NFS/scp would be what I would suggest. It's easier to
> administer as well.