Hi, I'm hitting an unusual error whenever an mbox file reaches 2GB. I think
the problem is specific to Solaris, because I installed the same Exim (same
configure and Local/Makefile) on a FreeBSD box and it worked fine there. But
to debug the Solaris problem I need to know exactly what "defer (27)" means.
Or, if someone has run into the same problem and found a solution, please
tell me!
I'm not using any kind of quota, and the local_delivery configuration is:
local_delivery:
  driver = appendfile
  file = /var/mail/$local_part/INBOX
  delivery_date_add
  envelope_to_add
  return_path_add
So, this is the error I get:
== jdperon@??? R=localuser T=local_delivery defer (27): File
too large: error while writing to /var/mail/jdperon/INBOX
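As far as I understand, the 27 in "defer (27)" is just the OS errno that the
write() call returned, and errno 27 here is EFBIG ("File too large"). I
double-checked with a tiny test program of my own (not part of Exim; the
file name below is just mine):

#include <stdio.h>
#include <string.h>

/* Print the operating system's text for errno 27. */
int main(void)
{
    printf("errno 27: %s\n", strerror(27));
    return 0;
}

Compiled and run as "cc errno27.c -o errno27 && ./errno27", it should print
the same "File too large" text that shows up in the log line above.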
When I run 'exim -d -M <message id>' I get this:
Exim version 4.71 uid=0 gid=0 pid=434 D=fbb95cfd
Probably ndbm
Support for: iconv() DKIM
Lookups: lsearch wildlsearch nwildlsearch iplsearch dbm dbmnz dnsdb
Authenticators:
Routers: accept dnslookup ipliteral manualroute queryprogram redirect
Transports: appendfile autoreply pipe smtp
Fixed never_users: 0
Size of off_t: 4
changed uid/gid: forcing real = effective
uid=0 gid=0 pid=434
auxiliary group list: <none>
seeking password data for user "root": cache not available
getpwnam() succeeded uid=0 gid=0
configuration file is /usr/local/exim/configure
log selectors = ffffffff 7fffffff
Reset TZ to GMT+3: time is 2010-01-06 16:10:38
LOG: MAIN
cwd=/ 5 args: /usr/local/exim/bin/exim -v -d -M 1NSbGh-00006t-1U
trusted user
admin user
skipping ACL configuration - not needed
set_process_info: 434 delivering specified messages
set_process_info: 434 delivering 1NSbGh-00006t-1U
reading spool file 1NSbGh-00006t-1U-H
user=dryti uid=100 gid=1 sender=dryti@???
sender_local=1 ident=dryti
Non-recipients:
Empty Tree
---- End of tree ----
recipients_count=1
body_linecount=1 message_linecount=7
Delivery address list:
jdperon@???
locking /var/spool/exim/db/retry.lockfile
locked /var/spool/exim/db/retry.lockfile
EXIM_DBOPEN(/var/spool/exim/db/retry)
returned from EXIM_DBOPEN
opened hints database /var/spool/exim/db/retry: flags=O_RDONLY
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
Considering: jdperon@???
unique = jdperon@???
dbfn_read: key=R:example.com
dbfn_read: key=R:jdperon@???
dbfn_read: key=R:jdperon@???:<dryti@???>
no domain retry record
no address retry record
jdperon@???: queued for routing
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
routing jdperon@???
--------> dnslookup router <--------
local_part=jdperon domain=example.com
checking domains
example.com in
"example.com:pop3.example.com:zaira.example.com:example.com.ar:pop3.example.com.ar:zaira.example.com.ar"?
yes (matched "example.com")
example.com in "! +local_domains"? no (matched "! +local_domains")
dnslookup router skipped: domains mismatch
--------> system_aliases router <--------
local_part=jdperon domain=example.com
calling system_aliases router
rda_interpret (string): ${lookup{$local_part}lsearch{/etc/aliases}}
search_open: lsearch "/etc/aliases"
search_find: file="/etc/aliases"
key="jdperon" partial=-1 affix=NULL starflags=0
LRU list:
:/etc/aliases
End
internal_search_find: file="/etc/aliases"
type=lsearch key="jdperon"
file lookup required for jdperon
in /etc/aliases
lookup failed
expanded:
file is not a filter file
parse_forward_list:
system_aliases router declined for jdperon@???
--------> userforward router <--------
local_part=jdperon domain=example.com
checking for local user
seeking password data for user "jdperon": cache not available
getpwnam() succeeded uid=106 gid=1
calling userforward router
rda_interpret (file): $home/.forward
expanded: /usr/u/jdperon/.forward
stat(/usr/u/jdperon/.)=0
/usr/u/jdperon/.forward does not exist
userforward router declined for jdperon@???
--------> lists_post router <--------
local_part=jdperon domain=example.com
checking domains
example.com in
"example.com:zaira.example.com:example.com.ar:zaira.example.com.ar"? yes
(matched "example.com")
checking senders
address match: subject=dryti@??? pattern=*
example.com in "*"? yes (matched "*")
dryti@??? in "*"? yes (matched "*")
checking require_files
file check: /usr/local/exim/lists/${local_part}
expanded file: /usr/local/exim/lists/jdperon
stat() yielded -1
errno = 2
lists_post router skipped: file check
--------> localuser router <--------
local_part=jdperon domain=example.com
checking for local user
seeking password data for user "jdperon": using cached result
getpwnam() succeeded uid=106 gid=1
calling localuser router
localuser router called for jdperon@???
domain = example.com
set transport local_delivery
queued for local_delivery transport: local_part = jdperon
domain = example.com
errors_to=NULL
domain_data=NULL localpart_data=NULL
routed by localuser router
envelope to: jdperon@???
transport: local_delivery
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
After routing:
Local deliveries:
jdperon@???
Remote deliveries:
Failed addresses:
Deferred addresses:
search_tidyup called
>>>>>>>>>>>>>>>> Local deliveries >>>>>>>>>>>>>>>>
--------> jdperon@??? <--------
locking /var/spool/exim/db/retry.lockfile
locked /var/spool/exim/db/retry.lockfile
EXIM_DBOPEN(/var/spool/exim/db/retry)
returned from EXIM_DBOPEN
opened hints database /var/spool/exim/db/retry: flags=O_RDONLY
dbfn_read: key=T:jdperon@???
retry record exists: age=29s (max 1w)
time to retry = 5h59m31s expired = 0
search_tidyup called
changed uid/gid: local delivery to jdperon <jdperon@???>
transport=local_delivery
uid=106 gid=1 pid=435
auxiliary group list: <none>
home=/usr/u/jdperon current=/usr/u/jdperon
set_process_info: 435 delivering 1NSbGh-00006t-1U to jdperon using
local_delivery
appendfile transport entered
appendfile: mode=600 notify_comsat=0 quota=0 warning=0
file=/var/mail/jdperon/INBOX format=unix
message_prefix=From ${if
def:return_path{$return_path}{MAILER-DAEMON}} ${tod_bsdinbox}\n
message_suffix=\n
maildir_use_size_file=no
locking by lockfile fcntl
lock name: /var/mail/jdperon/INBOX.lock
hitch name: /var/mail/jdperon/INBOX.lock.sol.4b44e02e.000001b3
lock file created
mailbox /var/mail/jdperon/INBOX is locked
writing to file /var/mail/jdperon/INBOX
writing data block fd=8 size=48 timeout=0
writing data block fd=8 size=384 timeout=0
write incomplete (294)
writing data block fd=8 size=90 timeout=0
writing error 27: File too large
appendfile yields 1 with errno=27 more_errno=0
search_tidyup called
local_delivery transport returned DEFER for jdperon@???
added retry item for T:jdperon@???: errno=27 more_errno=0 flags=0
post-process jdperon@??? (1)
LOG: MAIN
== jdperon@??? R=localuser T=local_delivery defer (27): File
too large: error while writing to /var/mail/jdperon/INBOX
>>>>>>>>>>>>>>>> deliveries are done >>>>>>>>>>>>>>>>
changed uid/gid: post-delivery tidying
uid=103 gid=1 pid=434
auxiliary group list: <none>
set_process_info: 434 tidying up after delivering 1NSbGh-00006t-1U
Processing retry items
Succeeded addresses:
Failed addresses:
Deferred addresses:
jdperon@???
locking /var/spool/exim/db/retry.lockfile
locked /var/spool/exim/db/retry.lockfile
EXIM_DBOPEN(/var/spool/exim/db/retry)
returned from EXIM_DBOPEN
opened hints database /var/spool/exim/db/retry: flags=O_RDWR
address match: subject=jdperon@??? pattern=*
example.com in "*"? yes (matched "*")
jdperon@??? in "*"? yes (matched "*")
retry for T:jdperon@??? = * 0 0
dbfn_read: key=T:jdperon@???
failing_interval=100530 message_age=31
Writing retry data for T:jdperon@???
first failed=1262704508 last try=1262805038 next try=1262826638 expired=0
errno=27 more_errno=0 File too large
dbfn_write: key=T:jdperon@???
end of retry processing
time on queue = 31s
warning counts: required 0 done 0
delivery deferred: update_spool=0 header_rewritten=0
end delivery of 1NSbGh-00006t-1U
search_tidyup called
search_tidyup called
>>>>>>>>>>>>>>>> Exim pid=434 terminating with rc=0 >>>>>>>>>>>>>>>>
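One thing I noticed in the output above is "Size of off_t: 4". If I'm
reading that right, this Exim binary was built with a 32-bit off_t, so a
single mailbox file is capped at 2GB and the write fails with EFBIG exactly
at that point. That would also explain why the same build works on FreeBSD,
where as far as I know off_t is 64-bit by default. To see whether large file
support changes this on Solaris, I used another small check of my own (the
file name and compile lines are just my example, nothing from Exim):

#include <stdio.h>
#include <sys/types.h>

/* Report the size of off_t for this compilation.
 * Built plainly on 32-bit Solaris this prints 4;
 * built with -D_FILE_OFFSET_BITS=64 it should print 8. */
int main(void)
{
    printf("sizeof(off_t) = %lu bytes\n", (unsigned long)sizeof(off_t));
    return 0;
}

For example, compare "cc check_off_t.c && ./a.out" against
"cc -D_FILE_OFFSET_BITS=64 check_off_t.c && ./a.out". So my guess is that
the Solaris build simply wasn't compiled with large file support, but I'd
appreciate confirmation, or a pointer to the right way to set this up in
Local/Makefile.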
Thanks for your help, and sorry for my English, hehe.
Cheers,
--
_________________________________________________
Juan Guillermo Bernhard
INSTITUTO NACIONAL DE TECNOLOGÍA INDUSTRIAL
DEPARTAMENTO DE INFORMÁTICA
DIVISIÓN DE OPERACIONES
Telephone (54 11) 4724-6200 / 6300 / 6400
Extension 6739
juan@???
_________________________________________________