Re: [exim] Deployment of Multiple Exim Servers for Scalability

Author: Todd Lyons
Date:  
CC: exim-users
Subject: Re: [exim] Deployment of Multiple Exim Servers for Scalability
On Thu, Jul 10, 2014 at 7:14 AM, John Traweek CCNA, Sec+
<johnt@???> wrote:
> I am running multiple servers in a forwarding-only scenario. Basically
> my front end exim servers use a shared backend MySQL DB to perform user
> lookups and obtain a forwarding destination. The balancing across
> multiple servers is simply done through a load balancing appliance in
> front of the exim servers. If your user base settings/config, etc., can
> be stored in a shared DB, there isn't any reason why you couldn't do it.
>
> I have not taken it to the next level, in that if I have to make
> an adjustment to exim or SA specific settings, I have to go to each
> server and make the change. I am not sure if that is even possible, but


Yes, it is possible to take it to that next level with Chef, Puppet, etc.

Your approach, as you describe it, covers two parts of scaling:
1) How the traffic is balanced among multiple machines
2) How the user configuration data is centrally available (in a database)

...but IMHO there are at least four parts to scaling:
3) How configuration is managed across multiple machines
4) How service state is managed across multiple machines

Your comments describe what you did for #1 and #2, but don't address
#3 and #4. Together, #3 and #4 are part of what is currently being
referred to as devops.
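
As an aside, #2 usually boils down to an Exim router doing a database
lookup, something roughly like the sketch below (the table and column
names are invented for illustration, not taken from your setup):

  # main section: connection details are placeholders
  hide mysql_servers = dbhost/maildb/eximuser/secret

  # redirect router that forwards to whatever address the DB returns;
  # the trailing "fail" makes the router decline when no row matches
  forward_from_db:
    driver = redirect
    data = ${lookup mysql{SELECT forward_to FROM users \
           WHERE localpart = '${quote_mysql:$local_part}' \
           AND domain = '${quote_mysql:$domain}'}{$value}fail}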

You can use Chef or Puppet to do #3 for you. As an example using
Chef: you make a minor change to a template file (for exim,
spamassassin, or anything else running on your exim servers), commit
it, and upload it to the chef server. Assuming your exim servers run
the chef client from cron every 15 or 30 minutes, the next time
chef-client runs on each exim server, it will reapply that template,
recreating the config file with the new contents, and then restart
the related service (you build your Chef cookbook to restart the
service whenever a template file changes).
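
A minimal sketch of such a recipe, assuming Debian-style paths and an
"exim4" service name (adjust both for your systems):

  # render the exim config from a template in the cookbook
  template '/etc/exim4/exim4.conf' do
    source 'exim4.conf.erb'
    owner  'root'
    group  'root'
    mode   '0644'
    # restart exim only when the rendered file actually changes
    notifies :restart, 'service[exim4]', :delayed
  end

  service 'exim4' do
    action [:enable, :start]
  end

  # converge on a schedule: run chef-client from cron twice an hour
  cron 'chef-client' do
    minute  '*/30'
    command '/usr/bin/chef-client'
  end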

Chef/Puppet can also do #4, but in my experience it frequently gets in
the way of manual troubleshooting or maintenance where you turn a
service off to do the work. You have to design, or at least document,
a way of turning off the state management so that it doesn't come
along and reverse your changes. As an example using Chef: I turn off
exim on server4 because I want to troubleshoot a particular mail
delivery issue and I don't want the logs cluttered. Chef runs, sees
exim is down, and restarts it, making my troubleshooting more
difficult due to the sheer volume of log lines.
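
One way to build that escape hatch into the cookbook, using a flag
file whose path is just a convention made up for this example:

  # enforce the service state only when no maintenance flag exists;
  # touch /etc/exim4/maintenance before troubleshooting, rm it after
  service 'exim4' do
    action :start
    not_if { ::File.exist?('/etc/exim4/maintenance') }
  end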

I described Chef above, but Puppet is also a highly regarded
configuration and state management system, used in many very large
outfits.

You may ask: why go through all the trouble of using Chef to manage
my configs when I can just ssh in and edit the files quickly?
1) Doesn't scale - That may work fine for 3 or 4 machines, but imagine
you have 20.
2) Disaster recovery - Something happens and all of your machines
die. To recover, you spin up a base OS with the hostname and IP of
the replacement machine(s), install chef-client (a one-line command
with curl and bash), run chef-client against the chef server, and it
recreates everything, including starting up all the services
(assuming your cookbooks do that).
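
That bootstrap looks roughly like this; the install URL is Chef's
omnitruck script, the server URL and run list are placeholders, and
the first run also needs the validation key in place:

  # one-line chef-client install with curl and bash
  curl -L https://omnitruck.chef.io/install.sh | sudo bash

  # register with the chef server and apply the run list
  sudo chef-client -S https://chef.example.com/ -r 'role[exim]'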

In the end, it's a lot like designing a database schema. You spend a
lot of time up front designing and optimizing the schema,
relationships, and indexes in order to head off performance problems
as your app usage and data volume grow. In the same way, you spend a
lot of time up front designing configuration file templates and
service state controls in order to avoid manual file edits and
hands-on service management as the number of servers and their
traffic grows.

...Todd
--
The total budget at all receivers for solving senders' problems is $0.
If you want them to accept your mail and manage it the way you want,
send it the way the spec says to. --John Levine