From: Theo E. Schlossnagle
Date:
To: Yann Golanski
CC: Warren Baker, exim-users
Subject: Re: [Exim] Logging
Yann Golanski wrote:
>
> On Mon, Dec 04, 2000 at 03:07:40PM +0200, Warren Baker wrote:
> > Do you think it would be feasible to have the ability to let
> > Exim log to a MySQL/Postgres/<whatever> db?
We do it. Not in real time, but it is neither impractical nor insane to do
so. We deliver about 1500 messages per second at peak (which, as I
understand it, is more than most people). That equates to 2-3 log inserts
per message, or up to 4500 inserts per second. 4500 inserts per second on a
well-tuned database is completely feasible. Will you need a beefy machine?
Of course, but anyone who really needs to process logs this way must have a
relatively large pipe and can probably afford a big enough box to run the
db.
Technically, this should be a very simple coding effort within Exim. The
MySQL facilities are already there and the logging is already there; you
just need to add a few exim.conf parameters and hook into the log code for
certain log events -- all of the plumbing is already inside Exim.
> No. Totally impractical and insane. Why? Well, the volume of logs will
> kill your database really well. Secondly, your database will be constantly
> updated (which is slow, or so I am told by DB experts), thus your machine
> needs to be pretty powerful. Thirdly, if you arrange your logs properly
> and gzip (or bzip) them, then a few perl scripts can do all you want --
> and they are not hard to write. If you want speed, then C and time is
> all you need.
C and processing time :-) Without a centralized index, there is no way to
find the transaction logs relating to mail to joebob@???. I need to know
the message ID and then all of the events relating to that message ID. It
would also be nice to know the average delivery time for messages to
yahoo.com. This is exactly the kind of work a database is meant for. You
can even do regressions -- if you change your peering relationships with
providers, how did your delivery times for certain domains change over the
past several days? No matter how you arrange your logs, you cannot run ad
hoc queries against them quickly or easily unless you have a formal
structure around them. A database is a damn good solution.
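For concreteness, here is the sort of ad hoc query I mean -- purely a
sketch, with hypothetical table and column names (assume a table exim_log
with one row per log event, carrying message_id, event_time, event_type
and address; a fuller sketch of such a table appears further down):

    -- Every event for mail sent to a given recipient:
    SELECT message_id, event_type, event_time
      FROM exim_log
     WHERE address LIKE '%@yahoo.com'
     ORDER BY message_id, event_time;

    -- Average delivery time, in seconds, to one domain: join each
    -- message's arrival ('<=') event to its delivery ('=>') event:
    SELECT AVG(UNIX_TIMESTAMP(d.event_time) - UNIX_TIMESTAMP(a.event_time))
      FROM exim_log a, exim_log d
     WHERE a.message_id = d.message_id
       AND a.event_type = '<='
       AND d.event_type = '=>'
       AND d.address LIKE '%@yahoo.com';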
> If you want logs on one central server, then why don't you mount an NFS
> file store and write logs there? Or scp logs to a central log
> repository?
>
> As for log searches, how many do you *really* do?
We do a lot. Hundreds to thousands per hour. Maybe you don't run that large
a site, but we send at least 14 million emails each day. If someone doesn't
get the email that they expect to get, we need to know why. So we (actually
they -- through a web interface) do a lookup and see the entire transaction
from Exim's perspective. It is great!
It is feasible to do with perl, but we have more than 10 outbound mail
servers and I don't want to go looking on each one. Besides, a MySQL
database is exactly what I am looking for: basically normal Exim logs with
an index on a few of the fields and an SQL query engine on top.
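Something along these lines would do it -- a hypothetical MySQL sketch, one
row per mainlog event, indexed on the fields we actually search:

    CREATE TABLE exim_log (
        message_id  VARCHAR(23)  NOT NULL,   -- Exim queue ID
        event_time  DATETIME     NOT NULL,
        event_type  CHAR(2)      NOT NULL,   -- '<=', '=>', '==', '**', ...
        address     VARCHAR(255),            -- sender or recipient
        host        VARCHAR(255),            -- remote host, if any
        detail      TEXT,                    -- remainder of the log line
        INDEX (message_id),
        INDEX (address),
        INDEX (event_time)
    );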
> > Maybe I am heading in the wrong direction, but how does everyone else
> > handle centralized logging in a clustered environment without forgetting
> > about client services?
We post-process logs into MySQL or Oracle (depending on the site), so we
have to wait up to 24 hours to see our logs in a good format. We do it out
of cron from the central database machine: it ssh's into the Exim boxes and
loads each mainlog into the database using a bulk loading tool (like
SQL*Loader). We do it once a day at our lowest-traffic time, but there is
no reason you couldn't do this every 15 minutes or so.
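With MySQL, the bulk-load step itself is one statement -- assuming the
mainlog has first been parsed into a tab-separated file whose columns match
the table sketched above:

    LOAD DATA INFILE '/tmp/exim_mainlog.tsv'
        INTO TABLE exim_log
        FIELDS TERMINATED BY '\t';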
--
Theo Schlossnagle
1024D/A8EBCF8F/13BD 8C08 6BE2 629A 527E 2DC2 72C2 AD05 A8EB CF8F
2047R/33131B65/71 F7 95 64 49 76 5D BA 3D 90 B9 9F BE 27 24 E7