I'd suggest you have a look at the way Usenet news handling evolved in
the last few years of the 90s (I've not been involved with it since,
although I get the feeling it stagnated after that).
There was a definite split into types of news handling:-
- relay/switch processing
- storage for local use (and the reader supporting functions)
The switch functions streamed articles out *as* they hit the
switch box - the articles never touched the disk.
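That pass-through behaviour can be sketched with plain file-like objects
(a hypothetical relay() helper for illustration, not code from any real
news server): each chunk is forwarded to every onward peer as it
arrives, and nothing is ever spooled.

```python
import io

def relay(source, sinks, chunk_size=8192):
    """Copy data from source to every sink as it arrives.

    Only one chunk is ever held in memory and nothing is written
    to disk - the article is gone as soon as it has been forwarded.
    """
    while True:
        chunk = source.read(chunk_size)
        if not chunk:
            break
        for sink in sinks:
            sink.write(chunk)

# Feed one "article" straight through to two onward peers.
article = io.BytesIO(b"Path: news.example!not-for-mail\r\n\r\nbody\r\n")
peers = [io.BytesIO(), io.BytesIO()]
relay(article, peers)
```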
Now there are reliability differences - if you lose a few (million)
Usenet articles, frankly no one cares - and it's a flood-fill
algorithm, so in many cases you get them back along another route.
However, if you really want fast, low-latency relaying of mail, then
you are looking at firing up the onward connection to the destination
as soon as you have the envelope information, and then passing the
message data straight through without saving it at all. Your end-of-data
response is then an echo of the remote end-of-data response (or, in
fact, a combination of the potentially multiple responses for
multi-recipient messages).
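To see why combining those responses is awkward, here is one naive
"worst status wins" policy as a sketch (combine_responses is a
hypothetical helper of mine, not how any real mailer does it). The
comments point out the trap it creates:

```python
def combine_responses(responses):
    """Collapse the end-of-data responses from several onward
    connections into the single response echoed back to the client.

    Naive "worst status wins" policy (an assumption for
    illustration): any permanent failure -> 5xx, else any temporary
    failure -> 4xx, else success. Note the trap: with a mix of 250
    and 550 the client will retry the whole message, duplicating it
    for the recipients that already succeeded - part of what makes
    cut-through relaying non-trivial.
    """
    codes = [code for code, _text in responses]
    if any(c >= 500 for c in codes):
        return max(c for c in codes if c >= 500), "permanent failure downstream"
    if any(400 <= c < 500 for c in codes):
        return max(c for c in codes if 400 <= c < 500), "temporary failure downstream"
    return 250, "accepted by all destinations"
```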
Making this work is definitely non-trivial. It is also so far from the
standard exim use case that it would be seriously counterproductive to
put it into standard exim - you are looking at a special case mailer.
[You also probably cannot have spam scanning or similar content
filtering within this sort of system, since the message data is never
held anywhere to scan.]
Please let us know when you have designed and built this system.
Nigel.
--
[ Nigel Metheringham Nigel.Metheringham@??? ]
[ - Comments in this message are my own and not ITO opinion/policy - ]