Ian P. Christian wrote:
> 2009/2/3 W B Hacker <wbh@???>:
>> Method depends on the 'why' of what you have now.
>>
>> Are you not triggering queue runners often enough?
>
Previous reply - I'd say not...
> Hi Bill,
>
> Thanks for your answer.
>
>> Is your queue full of icebergs in the form of undeliverables you should
>> have rejected up-front?
>
See below - but from the numbers, I'd bet on too much garbage being
admitted and allowed to go too deep.
> The queue quite possibly is full of stuff that shouldn't be there,
> and I am addressing that as part of a larger scale project - but one
> bit at a time. I work for an ISP that is growing reasonably quickly,
> so I want to build scalability into the platform.
>
> We are talking about pretty significant numbers of mails, apparently
> in the region of 1.5 million a day, but unfortunately (again, something
> I'll address) I can't easily read that figure from anywhere. As for
> what hits fallback, it's only about 100k a day.
>
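For scale, the quoted volume works out to a fairly modest average rate. A quick back-of-envelope check (assuming an even spread over the day, which real traffic never has):

```shell
# 1.5 million messages spread over the 86400 seconds in a day
echo $(( 1500000 / 86400 ))   # integer average: ~17 messages/second
```

Peaks will of course be several times that average.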
OK, first things first. Under that heavy a load it can be challenging to
turn up the heat on logging detail and not run out of disk space or CPU,
but you'll need to do it, at least selectively, and even quiet times of
day should show whether your ACLs are at their best.
Have a look at:
log_selector = +all -<something> -<something else>
I use:
log_selector = +all -all_parents -queue_run -arguments
and
syslog_duplication = false
Thereafter, 'grep', 'exigrep', and eximstats (easily aimed at specific
patterns) are helpful tools.
Ex:
grep -c 'connection count' /var/log/exim/mainlog
for attempted arrivals - accepted or rejected.
A more detailed log will show you where next to look.
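As a toy illustration of the count above (the log lines below are fabricated for the example, not real Exim output, but the 'connection count' pattern is what the mainlog records per inbound connection):

```shell
# Fabricated sample log lines - for illustration only
cat > /tmp/mainlog.sample <<'EOF'
2009-02-03 10:00:01 SMTP connection from [192.0.2.1] (TCP/IP connection count = 3)
2009-02-03 10:00:02 1LUxyz-000001-AB <= user@example.com H=[192.0.2.1]
2009-02-03 10:00:05 SMTP connection from [192.0.2.2] (TCP/IP connection count = 4)
EOF

# Count attempted arrivals - accepted or rejected
grep -c 'connection count' /tmp/mainlog.sample
```

The same pattern works against the real /var/log/exim/mainlog once the log_selector above is in place.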
>> Are your retry rules altered from default?
>
> Not significantly.
>
>> Do you have reliable DNS resolvers available?
>
> Yes, local DNS servers are used, with redundancy built into them.
>
>> Are you saturating your bandwidth, or is it so dodgy that traffic cannot
>> reliably get out on each attempt?
>
> No, we aren't. The server isn't even shifting 1 Mbit.
>
Adds more weight to why I suggested more frequent queue running.
>> Exim should be able to transit around 100,000 typical messages a day and
>> not stress typical hardware enough to kick the fans into high-speed.
>
> This server in question is actually only a dual P3-800, however CPU
> usage sits at about 50 percent idle.
>
> Looking at 'dstat' (great tool btw for those who haven't used it), I
> think most of the problem is receiving hosts being slow.
>
> # dstat
> ----total-cpu-usage---- -dsk/total- -net/total- ---paging-- ---system--
> usr sys idl wai hiq siq| read writ| recv send| in out | int csw
> 56 3 38 2 0 0|7452B 307k| 0 0 | 0.1 0.2 | 44 310
> 59 2 31 2 0 5| 0 544k|4909B 5126B| 0 0 | 77 382
> 50 9 34 6 0 1|8192B 640k|5416B 3592B| 0 0 | 93 777
> 47 5 39 7 0 0| 0 768k| 14k 5018B| 0 0 | 110 584 ^C
>
The problem with dstat and the like is that they give you data, not
information. E.g. the 'what' shows, but the 'why' does not.
exiwhat and eximstats are more relevant.
> related exim settings:
> remote_max_parallel = 20
> queue_run_max = 100
> split_spool_directory = true
> ignore_bounce_errors_after = 24h
> timeout_frozen_after = 7d
> auto_thaw = 8h
>
>
==== ours are: =====
# MAIN_4: Resource and load management controls
#
log_selector = +all -all_parents -queue_run -arguments
syslog_duplication = false
smtp_enforce_sync = true
smtp_max_synprot_errors = 2
smtp_max_unknown_commands = 2
smtp_ratelimit_hosts = !localhost : *
smtp_ratelimit_mail = 2,0.5s,1.05,4m
smtp_ratelimit_rcpt = 4,0.25s,1.015,4m
smtp_accept_max_nonmail_hosts = 3
smtp_accept_max_nonmail = 3
smtp_accept_max = 100
smtp_accept_max_per_connection = 3
smtp_accept_max_per_host = 2
smtp_accept_queue_per_connection = 10
smtp_load_reserve = 10
smtp_connect_backlog = 50
smtp_receive_timeout = 10m
smtp_reserve_hosts = [redacted]
queue_only = false
queue_run_max = 400
auto_thaw = 20m
remote_max_parallel = 10
tcp_nodelay = true
=========================================
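To decode the two rate-limit lines above: per the Exim spec, the four fields are threshold, base delay, factor, and maximum delay. A commented restatement (my annotation, not part of the running config):

```
# smtp_ratelimit_mail = 2,0.5s,1.05,4m
#   threshold: after 2 MAIL commands on one connection,
#   base:      start delaying responses by 0.5 seconds,
#   factor:    multiplying the delay by 1.05 for each further command,
#   limit:     but never delaying longer than 4 minutes.
# smtp_ratelimit_rcpt works the same way, keyed on RCPT commands.
```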
> # ps aux | grep "exim[4]* -q" -c
> 67
>
At 140 or so we run out of PostgreSQL connections...
> Exim is run with ' -bd -q15m'
>
-q55s here
> Hints db is in tmpfs.
>
Decent SATA RAID.
> I suspect that not enough queue runners are started to be honest, I've
> done a lot of tweaking of the config today (which is years old) - and
> the queue is slowly being reduced.... slowly :)
>
> How many queue runners would you recommend here?
>
See above. I seldom see more than a handful, because the trick is not to
run many at once (all begging for disk positioner activity) but many in
sequence - a sharp snatch off disk, then an unavoidable longer wait while
the far end picks its nose, counts its toes, and finally takes the
traffic on board.
> I'm a little puzzled as to why I had stuff in my mailq which was over
> 30 days old when my retries are :
>
> F,8h,15m; G,16h,1h,1.5; F,30d,6h
>
* * F,10m,2m; G,2h,5m,1.1; F,4h,30m
We 'fail' at 4 hours, save for one client - 20 *minutes*.
Business or personal, folks want to know 'NOW' if there is a problem -
while the banks are still open, the phone will be answered - whatever.
Mail servers are no longer at the end of a 4 WPM undersea cable, nor
coal-fired. Nearly all WILL be up, and WILL answer on the first go.
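For anyone decoding the rule strings: an Exim retry rule is 'address-pattern, error-type, list of retry algorithms', where F is a fixed interval and G is a geometrically increasing one. A commented sketch of the rule I quoted above (my annotations, not shipped config):

```
# Exim retry section
begin retry
*    *    F,10m,2m; G,2h,5m,1.1; F,4h,30m
# F,10m,2m    -> fixed: retry every 2 minutes for the first 10 minutes
# G,2h,5m,1.1 -> geometric: start at 5 minutes, multiply by 1.1 each
#                time, until 2 hours have passed since the first failure
# F,4h,30m    -> fixed: every 30 minutes; give up ('fail') at 4 hours
```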
> This was before I added auto_thaw though, so perhaps they were just
> frozen. I removed older messages earlier, and a few other things to
> bring the queue down to 30k messages - it seems my changes have things
> reasonably well under control now.
>
> However... let's consider it an academic exercise if you don't mind.
> How would I achieve what I wanted to do, even if I don't need to do
> it?
>
See above.
Step two is that no matter how powerful a server or how efficient an MTA
config, you *really really* should break up these overlarge user
communities into smaller 'chunks' on separate boxen.
A rack full of VIA Nano / Intel Atom or ARM RISC mini's each with a
proper subset by domain or whatever vs one or two GodzillaBoxen...
... means that any single failure has a more manageable number of shrill
phone calls hitting you just when you are trying to buy enough peace and
quiet to fix the problem.
Put 15,000+ users on the same box and life gets hectic when anything
belches. A hundred will be on the teat at any given time of the 'workday'.
Remember - they might not miss an MTA for hours. They'll miss POP or
IMAP in *minutes*.
> Thanks, and sorry for the long post!
>
'Igualmente'
Bill