Re: [Exim] Performance bottleneck scanning large spools.

Author: Dave C.
Date:  
To: Theo Schlossnagle
CC: exim-users
Subject: Re: [Exim] Performance bottleneck scanning large spools.
If you don't already, I'd try setting this option:

| split_spool_directory
|
|    Type:    boolean
|    Default: false
|
| If this option is set, it causes Exim to split its input directory
| into 62 subdirectories, each with a single alphanumeric character
| as its name. The fifth character of the message id is used to
| allocate messages to subdirectories; this is the least significant
| base-62 digit of the time of arrival of the message.
|
| Splitting up the spool in this way may provide better performance
| on systems where there are long mail queues, by reducing the number
| of files in any one directory. The msglog directory is also split
| up in a similar way to the input directory; however, if
| preserve_message_logs is set, all old msglog files are still placed
| in the single directory msglog.OLD.
|
| It is not necessary to take any special action for existing
| messages when changing split_spool_directory. Exim notices messages
| that are in the 'wrong' place, and continues to process them. If
| the option is turned off after a period of being on, the
| subdirectories will eventually empty and get deleted.
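Turning it on is just the bare option name in the main part of the
configure file; a minimal sketch (the spool path in the comment is only
the usual default, so adjust for your build):

  # main section of the Exim configure file
  split_spool_directory

  # after a restart, new messages are filed under single-character
  # subdirectories such as /var/spool/exim/input/A/ rather than one
  # flat input/ directory

With 200,000 messages queued, that works out to a few thousand files per
directory instead of 200,000 in one, which is where the startup scans
should get noticeably cheaper.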

On Thu, 30 Dec 1999, Theo Schlossnagle wrote:

> I have a problem... this is a tough one perhaps...
>
> I am running exim in a production setup and we are sending out around 6
> million emails (unique and individually addressed) per day. This has
> worked fine until now. We are slightly overloading the systems now and
> they can't keep up. If we get a spike in the flow of emails, the queue
> size jumps up to 200,000 messages on the machine that saw the spike.
> Usually after a spike, we have a lull, so one would think that exim would
> clean up what it couldn't handle (we have it attempting immediate
> delivery, but if it has too many messages to send it will queue those it
> cannot handle).
>
> In order to do this I have to fork a few hundred queue runners (no
> problem).
>
> Here is the issue.
>
> Exim takes O(n) time to start, where n is the size of the queue
> (directly proportional). It appears to be reading the entire queue (not
> the message contents, but the directory entries and msglog
> information). Is there a way around this? It takes up to 5 minutes
> sometimes (because I have 200 processes
> reading 100s of MB off disk, albeit the same 100s of MB). In those 5
> minutes, I could have sent out about 30000 messages (6 machines * 300
> seconds * 18 messages/second).
>
> The way I would fix it is to map a decent sized shared memory (sysV)
> segment and keep the info found there, so as long as there is at least
> one exim process running, the spool directory doesn't need to be
> scanned. I would wager this would be a lot of hacking.
>
> So, any suggestions? I thought about loading the directories (not the
> files, just the entries [name->inode list]) onto a RAM disk, but I would
> have to write a small kernel module to do that (using Linux, by the way).
>
> Please tell me I missed something in the docs and I can say something
> like
>
> full_spool_scan = false ;)
>
> In dire need of assistance. Thanks!
>
> --
> Theo Schlossnagle
> Senior Systems Engineer
> 33131B65/2047/71 F7 95 64 49 76 5D BA 3D 90 B9 9F BE 27 24 E7
>
>
>
> --
> ## List details at http://www.exim.org/mailman/listinfo/exim-users Exim details at http://www.exim.org/ ##
>