Author: Sheldon Hearn
Date:
To: Philip Hazel
CC: exim-users
Subject: Re: [Exim] split_spool_directory overhead
On Fri, 07 Sep 2001 09:17:58 +0100, Philip Hazel wrote:
> It depends on exactly what degree of insurance you want. If you turn off
> fsync(), and submit your zillion messages to your queue, the entire
> submission will finish and return control to your calling script/program
> saying "OK", but the final blocks of data will still be in memory, and
> not on disc.
No problem. Typically, queue runners aren't launched until long after
queuing completes.
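For the archives, here's a minimal standalone sketch of the distinction Philip describes (made-up filename, not Exim code): write() returning success only means the data reached the kernel's buffer cache, and it's the fsync() that makes it durable.

    /* Sketch only: write() "succeeds" as soon as the data is in the
     * buffer cache; without the fsync(), a crash before the kernel
     * flushes those pages can lose the "submitted" message. */
    #include <fcntl.h>
    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>
    #include <unistd.h>

    int main(void)
    {
        const char *msg = "spool data\n";
        int fd = open("spoolfile", O_WRONLY | O_CREAT | O_TRUNC, 0640);
        if (fd < 0) { perror("open"); return EXIT_FAILURE; }

        if (write(fd, msg, strlen(msg)) < 0) { perror("write"); return EXIT_FAILURE; }
        /* Here the caller already sees "OK", but the final blocks may
         * still be in memory only. */

        if (fsync(fd) < 0) { perror("fsync"); return EXIT_FAILURE; }
        /* Only now is the data known to be on disc (modulo drive caches). */

        close(fd);
        return EXIT_SUCCESS;
    }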
> If you turn off fsync() during delivery, Exim's journal of completed
> deliveries won't be flushed to disc, so after a crash you will get
> repeated deliveries because it will have forgotten what it's done.
This is the most dangerous area. If a few duplicate deliveries are
tolerable in the unlikely event of a system failure, then this isn't a
problem.
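To spell out why the duplicates happen, here's an illustrative sketch of the general deliver-then-journal pattern (deliver_one() is a hypothetical stand-in; this is not Exim's actual journal code):

    /* Deliver, append the recipient to a journal, fsync the journal.
     * If the fsync is skipped and the box crashes after the delivery
     * but before the journal blocks reach disc, a restart finds no
     * record of the delivery and delivers the message again. */
    #include <stdio.h>
    #include <unistd.h>

    extern int deliver_one(const char *recipient);   /* hypothetical delivery routine */

    static int deliver_and_record(int journal_fd, const char *recipient)
    {
        if (deliver_one(recipient) < 0) return -1;   /* delivery failed; nothing to journal */

        char line[512];
        int len = snprintf(line, sizeof(line), "%s\n", recipient);
        if (write(journal_fd, line, (size_t)len) != len) return -1;

    #ifndef NO_FSYNC
        /* Without this, the journal entry may sit in the buffer cache;
         * a crash here means a repeated delivery on restart. */
        if (fsync(journal_fd) < 0) return -1;
    #endif
        return 0;
    }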
> Modern hardware is pretty reliable, so I imagine (IANAE on this) you'll
> get away with it most of the time, but what "most" means I don't know.
> (How reliable is your power supply?) You then have to judge whether the
> risk is acceptable.
Power's pretty regular. The systems are colocated with backup power.
Whatever we lose in the event of power failure, we can make up in
damages. :-)
> I think this area is too risky for me actually to add a no_fsync option
> to the main source tree. It is giving people just too much rope...
No problem. This one I can carry locally. It's just hard to keep
rope-related comments to myself. *zip* :-)
> An easier way to provide it for yourself, instead of patching the code
> everywhere it occurs, would be to add the single line
>
> #define fsync(x) 0
I haven't worked at proving this, but my feeling when I opted for a
config option was that two binaries would be less efficient as far as
the buffer cache is concerned.
Given that the spool directory needs to change frequently, I already
need to use multiple configuration files.
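For anyone following along, a tiny standalone illustration of what that define does (not from the Exim tree; where the line best lives in Exim's sources is left open):

    /* The define must come after <unistd.h> (or whatever header
     * declares fsync), otherwise the declaration itself gets mangled. */
    #include <unistd.h>

    #define fsync(x) 0   /* every fsync(fd) call now compiles to 0, i.e. "success" */

    int flush_example(int fd)
    {
        (void)fd;            /* silences unused-parameter warnings once the call is gone */
        return fsync(fd);    /* expands to "return 0;" -- the syscall disappears */
    }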
Thanks for the feedback, Philip. I feel better now. :-)