[exim-cvs] cvs commit: exim/exim-doc ABOUT exim/exim-doc/do…

Author: Philip Hazel
Date:  
To: exim-cvs
Subject: [exim-cvs] cvs commit: exim/exim-doc ABOUT exim/exim-doc/doc-docbook ABOUT AdMarkup.txt HowItWorks.txt Makefile MyAsciidoc.conf MyStyle-chunk-html.xsl MyStyle-filter-fo.xsl MyStyle-fo.xsl MyStyle-
ph10 2005/06/16 11:32:31 BST

  Modified files:
    exim-doc/doc-scripts ABOUT 
    exim-doc/doc-src     ABOUT 
  Added files:
    exim-doc             ABOUT 
    exim-doc/doc-docbook ABOUT AdMarkup.txt HowItWorks.txt 
                         Makefile MyAsciidoc.conf 
                         MyStyle-chunk-html.xsl 
                         MyStyle-filter-fo.xsl MyStyle-fo.xsl 
                         MyStyle-html.xsl MyStyle-nochunk-html.xsl 
                         MyStyle-spec-fo.xsl MyStyle-txt-html.xsl 
                         MyStyle.xsl MyTitlepage.templates.xml 
                         Myhtml.css Pre-xml TidyHTML-filter 
                         TidyHTML-spec Tidytxt filter.ascd 
                         spec.ascd x2man 
  Log:
  Install all the files that comprise the new DocBook way of making the
  documentation.


  Revision  Changes    Path
  1.1       +33 -0     exim/exim-doc/ABOUT (new)
  1.1       +16 -0     exim/exim-doc/doc-docbook/ABOUT (new)
  1.1       +434 -0    exim/exim-doc/doc-docbook/AdMarkup.txt (new)
  1.1       +541 -0    exim/exim-doc/doc-docbook/HowItWorks.txt (new)
  1.1       +177 -0    exim/exim-doc/doc-docbook/Makefile (new)
  1.1       +205 -0    exim/exim-doc/doc-docbook/MyAsciidoc.conf (new)
  1.1       +20 -0     exim/exim-doc/doc-docbook/MyStyle-chunk-html.xsl (new)
  1.1       +55 -0     exim/exim-doc/doc-docbook/MyStyle-filter-fo.xsl (new)
  1.1       +241 -0    exim/exim-doc/doc-docbook/MyStyle-fo.xsl (new)
  1.1       +171 -0    exim/exim-doc/doc-docbook/MyStyle-html.xsl (new)
  1.1       +11 -0     exim/exim-doc/doc-docbook/MyStyle-nochunk-html.xsl (new)
  1.1       +17 -0     exim/exim-doc/doc-docbook/MyStyle-spec-fo.xsl (new)
  1.1       +23 -0     exim/exim-doc/doc-docbook/MyStyle-txt-html.xsl (new)
  1.1       +202 -0    exim/exim-doc/doc-docbook/MyStyle.xsl (new)
  1.1       +101 -0    exim/exim-doc/doc-docbook/MyTitlepage.templates.xml (new)
  1.1       +31 -0     exim/exim-doc/doc-docbook/Myhtml.css (new)
  1.1       +173 -0    exim/exim-doc/doc-docbook/Pre-xml (new)
  1.1       +79 -0     exim/exim-doc/doc-docbook/TidyHTML-filter (new)
  1.1       +139 -0    exim/exim-doc/doc-docbook/TidyHTML-spec (new)
  1.1       +21 -0     exim/exim-doc/doc-docbook/Tidytxt (new)
  1.1       +1759 -0   exim/exim-doc/doc-docbook/filter.ascd (new)
  1.1       +33111 -0  exim/exim-doc/doc-docbook/spec.ascd (new)
  1.1       +212 -0    exim/exim-doc/doc-docbook/x2man (new)
  1.2       +5 -2      exim/exim-doc/doc-scripts/ABOUT
  1.2       +7 -4      exim/exim-doc/doc-src/ABOUT


Index: ABOUT
====================================================================
$Cambridge: exim/exim-doc/ABOUT,v 1.1 2005/06/16 10:32:31 ph10 Exp $

CVS directory exim/exim-doc
---------------------------

This directory contains all the files related to Exim documentation. They are
held in a number of subdirectories.

  doc-docbook      This directory contains the AsciiDoc and DocBook sources for
                   the Exim specification and the filter description. It also
                   contains a Makefile and all the scripts, stylesheets, etc.
                   that are used to create the distributed renditions of the
                   documents. This way of creating the documentation was
                   introduced for release 4.60.


  doc-misc         This directory contains a number of miscellaneous documents
                   that are relevant to Exim, but not part of its distribution
                   tarball.


  doc-scripts      This directory contains scripts for building exported
                   documentation from the original SGCAL input source. These were
                   used up to and including release 4.50.


  doc-src          This directory contains the SGCAL source documents that were
                   used up to and including release 4.50.


  doc-txt          This directory contains documentation that is maintained only
                   as text files.


Each of these directories contains an ABOUT file that describes its contents in
more detail.

End

Index: ABOUT
====================================================================
$Cambridge: exim/exim-doc/doc-docbook/ABOUT,v 1.1 2005/06/16 10:32:31 ph10 Exp $

CVS directory exim/exim-doc/doc-docbook
---------------------------------------

This directory contains the AsciiDoc and DocBook sources for the Exim
specification and the filter description. It also contains a Makefile and all
the scripts, stylesheets, etc. that are used to create the distributed
renditions of the documents. This way of creating the documentation was
introduced for release 4.60.

The file HowItWorks.txt explains the processes by which the distributed
renditions are created. It also contains a list of the files in this directory
and what they all contain.

End

Index: AdMarkup.txt
====================================================================
$Cambridge: exim/exim-doc/doc-docbook/AdMarkup.txt,v 1.1 2005/06/16 10:32:31 ph10 Exp $

Asciidoc markup used in the Exim documentation
----------------------------------------------

This file contains a summary of the AsciiDoc markup that is used in the source
files of the Exim documentation. The source files are in plain text that can be
edited by any text editor. They are converted by the AsciiDoc application into
DocBook XML for subsequent processing into the various output formats.

This markup requires AsciiDoc release 6.0.3 or later.

The advantage of using AsciiDoc format as a "back end" is that it uses
relatively simple markup in the majority of the text, making it easier to read
and edit. The disadvantage is that it is tricky to deal with complicated
formatting - though that is probably true of any markup language - and there
are a few gotchas.

The Exim documentation uses the default AsciiDoc markup with some additions. I
have created a special AsciiDoc configuration file for use with the Exim
documentation. You must use this configuration if you want to get sensible
results.


SPECIAL CHARACTERS

When typing paragraphs of text, the following character sequences are
recognized as markup if they occur surrounding a word or phrase within a
paragraph. In the list below, ... represents the text that is enclosed.

    '...'    single quotes        italic:
                                    used for email addresses, domains, local
                                    parts, header names, user names


    *...*    asterisks            bold
                                    used for things like "*Note:*"


    `...`    backticks            monospaced text
                                    used for literal quoting


    $...$    dollar               Exim variable
                                    maps to XML <varname> with leading $


    %...%    percent              Exim option, command line option
                                    maps to XML <option>


    ^...^    circumflex           Exim driver name, Unix command, filter command
                                   maps to XML <command>


    ^^...^^  double circumflex    C function: maps to XML <function>


    ^%...%^  circumflex percent   parameter: maps to XML <parameter>
                                    Not currently used


    _..._    underscore           file name: maps to XML <filename>


    ``...''  backticks & quotes   put word in quotation marks


For example,

    This is an 'italic phrase'. This is a _filename_ and a $variable$.
    This ``word'' is in quote marks.


These quoting characters are recognized only if they are not flanked by
alphanumeric characters. Thus, for instance, an apostrophe within a word can be
represented as a single quote without any problem. Quoting can be nested, but
not overlapped. However, the resulting XML from nested quotes is not always
valid, so nesting is best avoided. (For example, `xxx'yyy'xxx` generates an
<emphasis> item within a <literal> item, and the DocBook DTD doesn't allow
that.) However, one combination that does work is <literal> within an
<emphasis>, so that is what you have to use if you want a boldface monospaced
font. That is, use *`bold mono`* and not `*bold mono*`. Sigh.

There are also some character sequences that are translated into non-Ascii
characters:

    --     en-dash    (&#x2013;)
    ---    em-dash    (&#x2014;)
    ~      hard space (&#x00a0;)
    !!     dagger     (&#x2020;)


The two-character sequence ## is turned into nothing. It is useful for
disambiguating markup. For example, something like

    ``quoted ending in 'emphasized'''


is ambiguous, and as AsciiDoc looks for the longest markup first, it doesn't do
what you want. You have to code this as

    ``quoted ending in 'emphasized'##''


The dashes are recognized only when surrounded by white space. The special Exim
AsciiDoc configuration also translates most apostrophes to a typographic
apostrophe (&#x2019;). There are some cases where this doesn't work, for
example, an apostrophe after a word in another font (because the quote
character gets in the way). For this purpose, there is a named "attribute" that
can be used. Named attributes are substituted inside curly braces.

For example, in the filter document there is a reference to an imaginary user
called lg303. User names are italicized, so this is always typed as 'lg303' but
if an apostrophe-s is needed after it, you have to type

    'lg303'{ap}s


Another named attribute is {tl}, which turns into a tilde character, because a
literal tilde becomes a hard space.

A third named attribute is {hh}, which turns into two hyphens, because a
literal "--" is converted into an en dash.

A fourth named attribute is {pc}, which turns into a percent sign.


ESCAPING SPECIAL CHARACTERS

Use backslash if you need to escape a special character.

***** GOTCHA *****
Backslash is not special when it precedes any other character. Thus, you need
to know which characters are special, which is a pain.
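
For example, a backslash before a recognized quoting character suppresses the
markup (a minimal sketch; as noted above, backslash is only special when it
precedes a character that is itself special):

```
    \*this is not bold* and \'this is not italic'
```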


COMMENTS

You can include comments that will not be copied to the XML document by
creating a comment block that is delimited by at least three slashes. For
example:

    ///
    This is an AsciiDoc comment block.
    ///



URL REFERENCES

To refer to a URL, just put it in the text, followed by some text in square
brackets to define the displayed text. If that is empty, the URL itself is
displayed. For example, here's a reference to http://www.exim.org/[exim home
page]. In HTML output, all you see is the display text; in printed output you
see something like "exim home page [http://www.exim.org/]". The URL is printed
in whatever is the current font, so it can be made bold by putting it in
asterisks (for example).


FORMAL PARAGRAPHS

A formal paragraph has a title. This is normally typeset in bold at the start
of the paragraph, and is useful as an alternative to a vertical labelled list
(see below). To create such a paragraph, you just put its title first, like
this:

    [title="the title"]
    Now give the text of the paragraph as usual.



CHAPTERS AND SECTIONS

AsciiDoc recognizes chapters and sections by looking for underlined lines, with
the underlining character used to determine the type of section.

    This is a chapter title
    -----------------------


    This is a section title
    ~~~~~~~~~~~~~~~~~~~~~~~


Chapter titles are used for running feet in the PDF form of the manual.
Sometimes they are too long, causing them to be split in an ugly way. The
solution to this is to define a short title for the chapter, like this:

    [titleabbrev="short title"]
    This is a rather long chapter title
    -----------------------------------



DISPLAYS

Displayed blocks in a monospaced font can just be indented:

    # Exim filter
    deliver baggins@???


However, it seems that if the first line in such a block starts with an
asterisk or if any lines in the block end in a backslash (as is quite often the
case in Exim configuration examples), you have to use a "listing block" or a
"literal block" instead of a "literal paragraph". Otherwise an initial asterisk
makes AsciiDoc think this is a list item, and a terminating backslash causes
lines to be concatenated.

Another time when you have to use an explicit block is when a display forms
part of a list item. This is because you have to indent such displays more than
usual, because the processors don't appear to do this automatically.

Listing blocks are delimited by lines of at least three hyphens; literal blocks
are delimited by lines of at least four dots. For example:

  ....
  /usr/sbin/sendmail -bf myfilter \
     -f islington@??? <test-message
  ....


Such blocks are indented by an amount that is specified in the style sheet, but
this amount is always the same, regardless of whether the block is inside a
list item (which is itself indented) or not. So if the block is within a list
item, it must be explicitly indented as well.

Blocks that are between lines of ampersands (at least 3 in each line) are
displayed (by default) in the normal font, but with the lines unchanged. Quotes
can be used in the block to specify different fonts. For example:

&&&&
`\n` is replaced by a newline
`\r` is replaced by a carriage return
`\t` is replaced by a tab
&&&&

When this kind of output is required within a list of any kind (see below), you
must precede it with a line consisting of just a plus sign, because by default
any kind of block terminates the list item.


CROSS-REFERENCES

To set a cross-reference point, enclose the name in double square brackets:

    [[SECTexample]]


To refer to a cross-reference point, enclose the name in double angle brackets:

    <<SECTexample>>



INDEX ENTRIES

To create an index entry, include a line like one of these:

    cindex:[primary text,secondary text]
    oindex:[primary text,secondary text]


at the appropriate point in the text. The first is for the "concept index" and
the second is for the "options index". Not all forms of output distinguish
between these - sometimes there is just one index.

The index for the Exim reference manual has a number of "see also" entries.
Rather than invent some fancy AsciiDoc way of doing this, I have just coded
them in XML, using the AsciiDoc escape hatch that is described below under
FUDGES.


LISTS

For a bulleted list, start each item in the list with a hyphen or an asterisk
followed by a space:

    - First item.
    - Second item.


For a numbered list, start each item with a dot followed by a space:

    . First item.
    . Second item.



VERTICAL LABELLED LISTS

These are used for Exim command line options and similar things. They map into
XML <variablelist> items. Start the list with the item name, followed by two
colons, on a line by itself. This is followed by the text for the list item.


LISTS CONTAINING MORE THAN ONE PARAGRAPH

If there is more than one paragraph in a list item, the second and subsequent
ones must be preceded by a line containing just a single "+" character, as
otherwise the list is terminated. Literal paragraphs can be included without
any special markup. For example:

    first item::
    This is the paragraph that describes the item.


      We can even have an indented display
      within the item
    +
    but any more paragraphs must be preceded by a plus character
    (otherwise they aren't included in the list, and won't be
    properly indented).


The "+" notation can also be used to include other kinds of block within a list
item. It's needed for all block types except nested lists and literal
paragraphs.

An alternative approach to lists that contain multiple paragraphs or blocks
within each item is to put a line containing just two hyphens immediately
before and immediately after the list. For example:

    --
    . First item


    Second paragraph of first item


    . Second item


    And so on
    --


This is particularly helpful for nested lists (see below).


NESTED LISTS

You can nest lists of different types. However, if you want to revert to an
outer list item at the end of a nested list, you must use the "--" feature
described above for the inner list, so that its end can be explicitly marked.
For example:

    . Outer list
    +
    Second paragraph in outer list
    +
    --
    - Inner list item
    - Inner list second item
    --
    +
    Another paragraph in the outer list first item


    . Next item in the outer list



TABLES

A fixed-width table is started by a line of hyphens that determines the width
of the table, interspersed with the following column stop characters:

    ` backtick   align left
    ' quote      align right
    . dot        align centre


The data is then aligned with the stop characters. For example:

    `---`---
    1   2
    3   4
    --------


Alternatively, if tildes are used instead of hyphens, the data fields are
comma-separated. Columns can also be specified numerically instead of by
pattern. This is usually used with CSV data. For example:

    `10`20`30~
    one, two, three
    ~~~~~


This format is useful when the data is full of markup so that its final length
bears little relationship to the input (for example, when there are cross
references).

By default, tables will be rendered with a frame at the top and bottom, and no
separators between rows and columns. You can use AsciiDoc "attributes" to
change this. Attributes are set by a sequence of name=value items in square
brackets, before the thing to which they apply. For example:

    [frame="none"]
    `-----`-----
    11    22
    33    44
    ------------


The values for "frame" are "topbot", "sides", "all", or "none". There is also a
"grid" attribute, whose possible values are "none", "cols", "rows", or "all".
For example:

    [frame="sides", grid="cols"]


The commas between the attribute settings are important; if they are omitted,
AsciiDoc ignores the attribute settings.


EXIM CONFIGURATION OPTION HEADINGS

Each Exim configuration option is formatted with its name, usage, type, and
default value on a single line, spread over the line so as to fill it
completely. The only way I know of aligning text using DocBook is to use a
table. A special table format has been created to handle this special case. For
example:

    `..'=
    %keep_malformed%, Use: 'main', Type: 'boolean', Default: 'false'
    ===


The first line defines four columns using stop characters, followed by an equals
character that defines the table's "ruler" character. There is no need to
define column widths, because the style forces the columns to fill the page
width. The data is comma-separated.


CHANGE BARS

I haven't yet found a way of doing change bars in the printed versions. However,
it is possible to put a green background behind changed text in the HTML
version, so the appropriate markup should be used. Before a changed paragraph,
insert

    [revisionflag="changed"]


This should precede any index settings at the start of the paragraph. If you
want to do this for a display, you must use the "&&&" block described above,
because that's the only type that I have set up to support it.
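
Putting this together, marking a changed display might look like this (a
sketch that assumes the attribute line goes immediately before the block, as
it does for a paragraph):

```
    [revisionflag="changed"]
    &&&&
    `\t` is replaced by a tab
    &&&&
```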


FUDGES

The current release of "fop", a program for producing PostScript from
"formatting objects" (fo) data, which is an intermediate output that can be
generated from DocBook XML, is not very good at page layout. For example, it
can place a section heading as the last line on a page. I have set up a style
that provides a means of forcing a page break in order to get round this. (But
in practice, it happens so often that I have given up trying to use it.)

At the AsciiDoc level, the markup uses a "backend block", which provides a way
of specifying DocBook output directly. Backend blocks are surrounded by lines
of plusses, and this particular fudge looks like this:

    ++++++++++++
    <?hard-pagebreak?>
    ++++++++++++


Backend blocks are used to insert XML comments into the output, to mark the
start and end of Exim's command line options. These are used by the x2man
script that creates the man page.
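
Such a block might look like this (the comment text here is illustrative; the
actual markers are whatever the x2man script searches for):

```
    ++++++++++++
    <!-- start of command line options -->
    ++++++++++++
```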


Philip Hazel
Last updated: 10 June 2005

Index: HowItWorks.txt
====================================================================
$Cambridge: exim/exim-doc/doc-docbook/HowItWorks.txt,v 1.1 2005/06/16 10:32:31 ph10 Exp $

CREATING THE EXIM DOCUMENTATION

"You are lost in a maze of twisty little scripts."


This document describes how the various versions of the Exim documentation, in
different output formats, are created from DocBook XML, and also how the
DocBook XML is itself created.


BACKGROUND: THE OLD WAY

From the start of Exim, in 1995, the specification was written in a local text
formatting system known as SGCAL. This is capable of producing PostScript and
plain text output from the same source file. Later, when the "ps2pdf" command
became available with GhostScript, that was used to create a PDF version from
the PostScript. (A few earlier versions were created by a helpful user who had
bought the Adobe distiller software.)

A demand for a version in "info" format led me to write a Perl script that
converted the SGCAL input into a Texinfo file. Because of the somewhat
restrictive requirements of Texinfo, this script has always needed a lot of
maintenance, and has never been 100% satisfactory.

The HTML version of the documentation was originally produced from the Texinfo
version, but later I wrote another Perl script that produced it directly from
the SGCAL input, which made it possible to produce better HTML.

There were a small number of diagrams in the documentation. For the PostScript
and PDF versions, these were created using Aspic, a local text-driven drawing
program that interfaces directly to SGCAL. For the text and texinfo versions,
alternative ascii-art diagrams were used. For the HTML version, screen shots of
the PostScript output were turned into gifs.


A MORE STANDARD APPROACH

Although in principle SGCAL and Aspic could be generally released, they would
be unlikely to receive much (if any) maintenance, especially after I retire.
Furthermore, the old production method was only semi-automatic; I still did a
certain amount of hand tweaking of spec.txt, for example. As the maintenance of
Exim itself was being opened up to a larger group of people, it seemed sensible
to move to a more standard way of producing the documentation, preferably fully
automated. However, we wanted to use only non-commercial software to do this.

At the time I was thinking about converting (early 2005), the "obvious"
standard format in which to keep the documentation was DocBook XML. The use of
XML in general, in many different applications, was increasing rapidly, and it
seemed likely to remain a standard for some time to come. DocBook offered a
particular form of XML suited to documents that were effectively "books".

Maintaining an XML document by hand editing is a tedious, verbose, and
error-prone process. A number of specialized XML text editors were available,
but all the free ones were at a very primitive stage. I therefore decided to
keep the master source in AsciiDoc format (described below), from which a
secondary XML master could be automatically generated.

All the output formats are generated from the XML file. If, in the future, a
better way of maintaining the XML source becomes available, this can be adopted
without changing any of the processing that produces the output documents.
Equally, if better ways of processing the XML become available, they can be
adopted without affecting the source maintenance.

A number of issues arose while setting this all up, which are best summed up by
the statement that a lot of the technology is (in 2005) still very immature. It
is probable that trying to do this conversion any earlier would not have been
anywhere near as successful. The main problems that still bother me are
described in the penultimate section of this document.

The following sections describe the processes by which the AsciiDoc files are
transformed into the final output documents. In practice, the details are coded
into a makefile that specifies the chain of commands for each output format.


REQUIRED SOFTWARE

Installing software to process XML puts lots and lots of stuff on your box. I
run Gentoo Linux, and a lot of things have been installed as dependencies that
I am not fully aware of. This is what I know about (version numbers are current
at the time of writing):

. AsciiDoc 6.0.3

    This converts the master source file into a DocBook XML file, using a
    customized AsciiDoc configuration file.


. xmlto 0.0.18

    This is a shell script that drives various XML processors. It is used to
    produce "formatted objects" for PostScript and PDF output, and to produce
    HTML output. It uses xsltproc, libxml, libxslt, libexslt, and possibly other
    things that I have not figured out, to apply the DocBook XSLT stylesheets.


. libxml 1.8.17
    libxml2 2.6.17
    libxslt 1.1.12


    These are all installed on my box; I do not know which of libxml or libxml2
    the various scripts are actually using.


. xsl-stylesheets-1.66.1

    These are the standard DocBook XSL stylesheets.


. fop 0.20.5

    FOP is a processor for "formatted objects". It is written in Java. The fop
    command is a shell script that drives it.


. w3m 0.5.1

    This is a text-oriented web browser. It is used to produce the Ascii form of
    the Exim documentation from a specially-created HTML format. It seems to do a
    better job than lynx.


. docbook2texi (part of docbook2X 0.8.5)

    This is a wrapper script for a two-stage conversion process from DocBook to a
    Texinfo file. It uses db2x_xsltproc and db2x_texixml. Unfortunately, there
    are two versions of this command; the old one is based on an earlier fork of
    docbook2X and does not work.


. db2x_xsltproc and db2x_texixml (part of docbook2X 0.8.5)

    More wrapping scripts (see previous item).


. makeinfo 4.8

    This is used to make a set of "info" files from a Texinfo file.


In addition, there are some locally written Perl scripts. These are described
below.


ASCIIDOC

AsciiDoc (http://www.methods.co.nz/asciidoc/) is a Python script that converts
an input document in a more-or-less human-readable format into DocBook XML.
For a document as complex as the Exim specification, the markup is quite
complex - probably no simpler than the original SGCAL markup - but it is
definitely easier to work with than XML itself.

AsciiDoc is highly configurable. It comes with a default configuration, but I
have extended this with an additional configuration file that must be used when
processing the Exim documents. There is a separate document called AdMarkup.txt
that describes the markup that is used in these documents. This includes the
default AsciiDoc markup and the local additions.

The author of AsciiDoc uses the extension .txt for input documents. I find
this confusing, especially as some of the output files have .txt extensions.
Therefore, I have used the extension .ascd for the sources.


THE MAKEFILE

The makefile supports a number of targets of the form x.y, where x is one of
"filter", "spec", or "test", and y is one of "xml", "fo", "ps", "pdf", "html",
"txt", or "info". The intermediate targets "x.xml" and "x.fo" are provided for
testing purposes. The other five targets are production targets. For example:

    make spec.pdf


This runs the necessary tools in order to create the file spec.pdf from the
original source spec.ascd. A number of intermediate files are created during
this process, including the master DocBook source, called spec.xml. Of course,
the usual features of "make" ensure that if this already exists and is
up-to-date, it is not needlessly rebuilt.

The "test" series of targets was created so that small tests could easily be
run fairly quickly, because processing even the shortish filter document takes
a bit of time, and processing the main specification takes ages.

Another target is "exim.8". This runs a locally written Perl script called
x2man, which extracts the list of command line options from the spec.xml file,
and creates a man page. There are some XML comments in the spec.xml file to
enable the script to find the start and end of the options list.

There is also a "clean" target that deletes all the generated files.


CREATING DOCBOOK XML FROM ASCIIDOC

There is a single local AsciiDoc configuration file called MyAsciidoc.conf.
Using this, one run of the asciidoc command creates a .xml file from a .ascd
file. When this succeeds, there is no output.
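
In essence, the makefile runs something like the following command (the exact
options shown are an assumption made for illustration; the Makefile itself is
authoritative):

```
    asciidoc -b docbook -f MyAsciidoc.conf -o spec.xml spec.ascd
```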


DOCBOOK PROCESSING

Processing a .xml file into the five different output formats is not entirely
straightforward. For a start, the same XML is not suitable for all the
different output styles. When the final output is in a text format (.txt,
.texinfo) for instance, all non-Ascii characters in the input must be converted
to Ascii transliterations because the current processing tools do not do this
correctly automatically.

In order to cope with these issues in a flexible way, a Perl script called
Pre-xml was written. This is used to preprocess the .xml files before they are
handed to the main processors. Adding one more tool onto the front of the
processing chain does at least seem to be in the spirit of XML processing.

The XML processors themselves make use of style files, which can be overridden
by local versions. There is one that applies to all styles, called MyStyle.xsl,
and others for the different output formats. I have included comments in these
style files to explain what changes I have made. Some of the changes are quite
significant.


THE PRE-XML SCRIPT

The Pre-xml script copies a .xml file, making certain changes according to the
options it is given. The currently available options are as follows:

-abstract

    This option causes the <abstract> element to be removed from the XML. The
    source abuses the <abstract> element by using it to contain the author's
    address so that it appears on the title page verso in the printed renditions.
    This just gets in the way for the non-PostScript/PDF renditions.


-ascii

    This option is used for Ascii output formats. It makes the following
    character replacements:


      &8230;    =>  ...       (sic, no #x)
      &#x2019;  =>  '         apostrophe
      &#x201C;  =>  "         opening double quote
      &#x201D;  =>  "         closing double quote
      &#x2013;  =>  -         en dash
      &#x2020;  =>  *         dagger
      &#x2021;  =>  **        double dagger
      &#x00a0;  =>  a space   hard space
      &#x00a9;  =>  (c)       copyright


    In addition, this option causes quotes to be put round <literal> text items,
    and <quote> and </quote> to be replaced by Ascii quote marks. You would think
    the stylesheet would cope with the latter, but it seems to generate non-Ascii
    characters that w3m then turns into question marks.


-bookinfo

    This option causes the <bookinfo> element to be removed from the XML. It is
    used for the PostScript/PDF forms of the filter document, in order to avoid
    the generation of a full title page.


-fi

    Replace any occurrence of "fi" by the ligature &#xFB01; except when it is
    inside an XML element, or inside a <literal> part of the text.


    The use of ligatures would be nice for the PostScript and PDF formats. Sadly,
    it turns out that fop cannot at present handle the FB01 character correctly.
    The only format that does so is the HTML format, but when I used this in the
    test version, people complained that it made searching for words difficult.
    So at the moment, this option is not used. :-(


-noindex

    Remove the XML to generate a Concept Index and an Options index.


-oneindex

    Remove the XML to generate a Concept and an Options Index, and add XML to
    generate a single index.


The source document has two types of index entry, for a concept and an options
index. However, no index is required for the .txt and .texinfo outputs.
Furthermore, the only output processor that supports multiple indexes is the
processor that produces "formatted objects" for PostScript and PDF output. The
HTML processor ignores the XML settings for multiple indexes and just makes one
unified index. Specifying two indexes gets you two copies of the same index, so
this has to be changed.


CREATING POSTSCRIPT AND PDF

These two output formats are created in three stages. First, the XML is
pre-processed. For the filter document, the <bookinfo> element is removed so
that no title page is generated, but for the main specification, no changes are
currently made.

Second, the xmlto command is used to produce a "formatted objects" (.fo) file.
This process uses the following stylesheets:

    (1) Either MyStyle-filter-fo.xsl or MyStyle-spec-fo.xsl
    (2) MyStyle-fo.xsl
    (3) MyStyle.xsl
    (4) MyTitleStyle.xsl


The last of these is not used for the filter document, which does not have a
title page. The first three stylesheets were created manually, either by typing
directly, or by copying from the standard stylesheet and editing.

The final stylesheet has to be created from a template document, which is
called MyTitlepage.templates.xml. This was copied from the standard styles and
modified. The template is processed with xsltproc to produce the stylesheet.
All this apparatus is appallingly heavyweight. The processing is also very slow
in the case of the specification document. However, there should be no errors.

In the third and final part of the processing, the .fo file that is produced by
the xmlto command is processed by the fop command to generate either PostScript
or PDF. This is also very slow, and you get a whole slew of errors, of which
these are a sample:

    [ERROR] property - "background-position-horizontal" is not implemented yet.


    [ERROR] property - "background-position-vertical" is not implemented yet.


    [INFO] JAI support was not installed (read: not present at build time).
      Trying to use Jimi instead
      Error creating background image: Error creating FopImage object (Error
      creating FopImage object
      (http://docbook.sourceforge.net/release/images/draft.png) :
      org.apache.fop.image.JimiImage


    [WARNING] table-layout=auto is not supported, using fixed!


    [ERROR] Unknown enumerated value for property 'span': inherit


    [ERROR] Error in span property value 'inherit':
      org.apache.fop.fo.expr.PropertyException: No conversion defined


    [ERROR] Areas pending, text probably lost in lineinclude parts matched in the
      response by response_pattern by means of numeric variables such as


The last one is particularly meaningless gobbledegook. Some of the errors and
warnings are repeated many times. Nevertheless, it does eventually produce
usable output, though I have a number of issues with it (see a later section of
this document). Maybe one day there will be a new release of fop that does
better. Maybe there will be some other means of producing PostScript and PDF
from DocBook XML. Maybe porcine aeronautics will really happen.


CREATING HTML

Only two stages are needed to produce HTML, but the main specification is
subsequently postprocessed. The Pre-xml script is called with the -abstract and
-oneindex options to preprocess the XML. Then the xmlto command creates the
HTML output directly. For the specification document, a directory of files is
created, whereas the filter document is output as a single HTML page. The
following stylesheets are used:

    (1) Either MyStyle-chunk-html.xsl or MyStyle-nochunk-html.xsl
    (2) MyStyle-html.xsl
    (3) MyStyle.xsl


The first stylesheet references the chunking or non-chunking standard
stylesheet, as appropriate.

The original HTML that I produced from the SGCAL input had hyperlinks back from
chapter and section titles to the table of contents. These links are not
generated by xmlto. One of the testers pointed out that the lack of these
links, or simple self-referencing links for titles, makes it harder to copy a
link name into, for example, a mailing list response.

I could not find where to fiddle with the stylesheets to make such a change, if
indeed the stylesheets are capable of it. Instead, I wrote a Perl script called
TidyHTML-spec to do the job for the specification document. It updates the
index.html file (which contains the table of contents), setting up anchors,
and then updates all the chapter files to insert appropriate links.

The index.html file as built by xmlto contains the whole table of contents in a
single line, which makes it hard to debug by hand. Since I was postprocessing
it anyway, I arranged to insert newlines after every '>' character.
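In other words (a Python sketch of the idea; the real script is Perl):

```python
def split_after_gt(html):
    # Break the single-line table of contents after every '>' so that the
    # file can be read and debugged by hand.
    return html.replace('>', '>\n')
```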

The TidyHTML-spec script also takes the opportunity to postprocess the
spec.html/ix01.html file, which contains the document index. Again, the index
is generated as a single line, so the script splits it up. Then it creates a
list of letters at the top of the index and hyperlinks them both ways from the
different letter portions of the index.
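The letter-linking idea can be sketched like this. This is purely illustrative
Python, not the Perl of TidyHTML-spec, and the <h3> letter headings it matches
are an assumption about the generated index, not taken from the real output:

```python
import re

def add_letter_links(index_html):
    # Collect the letter headings (assumed to be <h3>A</h3> etc.), build a
    # letter bar that links down to each one, and give each heading an anchor
    # plus a link back to the top.
    letters = re.findall(r'<h3>([A-Z])</h3>', index_html)
    bar = ' '.join('<a href="#%s">%s</a>' % (l, l) for l in letters)
    body = re.sub(r'<h3>([A-Z])</h3>',
                  r'<h3 id="\1"><a href="#top">\1</a></h3>', index_html)
    return '<p id="top">%s</p>\n%s' % (bar, body)
```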

People wanted similar postprocessing for the filter.html file, so that is now
done using a similar script called TidyHTML-filter. It was easier to use a
separate script because filter.html is a single file rather than a directory,
so the logic is somewhat different.


CREATING TEXT FILES

This happens in four stages. The Pre-xml script is called with the -abstract,
-ascii and -noindex options to remove the <abstract> element, convert the input
to Ascii characters, and to disable the production of an index. Then the xmlto
command converts the XML to a single HTML document, using these stylesheets:

    (1) MyStyle-txt-html.xsl
    (2) MyStyle-html.xsl
    (3) MyStyle.xsl


The MyStyle-txt-html.xsl stylesheet is the same as MyStyle-nochunk-html.xsl,
except that it contains an additional item to ensure that a generated "copyright"
symbol is output as "(c)" rather than the Unicode character. This is necessary
because the stylesheet itself generates a copyright symbol as part of the
document title; the character is not in the original input.

The w3m command is used with the -dump option to turn the HTML file into Ascii
text, but this contains multiple sequences of blank lines that make it look
awkward, so, finally, a local Perl script called Tidytxt is used to convert
sequences of blank lines into a single blank line.
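The blank-line compaction is trivial to express (illustrative Python; Tidytxt
itself is a Perl script):

```python
import re

def tidy_blank_lines(text):
    # Collapse any run of two or more blank lines into a single blank line.
    return re.sub(r'\n{3,}', '\n\n', text)
```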


CREATING INFO FILES

This process starts with the same Pre-xml call as for text files. The
<abstract> element is deleted, non-Ascii characters in the source are
transliterated, and the <index> elements are removed. The docbook2texi script
is then called to convert the XML file into a Texinfo file. However, this is
not quite enough. The converted file ends up with "conceptindex" and
"optionindex" items, which are not recognized by the makeinfo command. An
in-line call to Perl in the Makefile changes these to "cindex" and "findex"
respectively in the final .texinfo file. Finally, a call of makeinfo creates a
set of .info files.
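The in-line Perl call amounts to a simple per-line substitution, sketched here
in Python (the Makefile's perl -ne one-liner substitutes the first occurrence
on each line; this sketch simply replaces all occurrences):

```python
def fix_texinfo_indexes(line):
    # makeinfo does not recognize docbook2texi's index commands, so rename
    # them to the standard cindex/findex forms.
    line = line.replace("conceptindex", "cindex")
    return line.replace("optionindex", "findex")
```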

There is one apparently unconfigurable feature of docbook2texi: it does not
seem possible to give it a file name for its output. It chooses a name based on
the title of the document. Thus, the main specification ends up in a file
called the_exim_mta.texi and the filter document in exim_filtering.texi. These
files are removed after their contents have been copied and modified by the
inline Perl call, which makes a .texinfo file.


CREATING THE MAN PAGE

I wrote a Perl script called x2man to create the exim.8 man page from the
DocBook XML source. I deliberately did NOT start from the AsciiDoc source,
because it is the DocBook source that is the "standard". This comment line in
the DocBook source marks the start of the command line options:

    <!-- === Start of command line options === -->


A similar line marks the end. If, at some time in the future, some way other
than AsciiDoc is used to maintain the DocBook source, it needs to be capable of
maintaining these comments.
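For illustration only, extracting the delimited region might look like this in
Python; x2man itself is a Perl script, and the exact wording of the end marker
is an assumption here, not quoted from the source:

```python
def extract_options(xml):
    # Return the text between the start and end marker comments. The end
    # marker's wording is hypothetical.
    start = "<!-- === Start of command line options === -->"
    end = "<!-- === End of command line options === -->"
    return xml.split(start, 1)[1].split(end, 1)[0]
```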


UNRESOLVED PROBLEMS

There are a number of unresolved problems with producing the Exim documentation
in the manner described above. I will describe them here in the hope that in
future some way round them can be found.

  (1)  Errors in the toolchain

       When a whole chain of tools is processing a file, an error somewhere in
       the middle is often very hard to debug. For instance, an error in the
       AsciiDoc might not show up until an XML processor throws a wobbly because
       the generated XML is bad. You have to be able to read XML and figure out
       what generated what. One of the reasons for creating the "test" series of
       targets was to help in checking out these kinds of problem.


  (2)  There is a mechanism in XML for marking parts of the document as
       "revised", and I have arranged for AsciiDoc markup to use it. However, at
       the moment, the only output format that pays attention to this is the HTML
       output, which sets a green background. There are therefore no revision
       marks (change bars) in the PostScript, PDF, or text output formats as
       there used to be. (There never were for Texinfo.)


  (3)  The index entries in the HTML format take you to the top of the section
       that is referenced, instead of to the point in the section where the index
       marker was set.


  (4)  The HTML output supports only a single index, so the concept and options
       index entries have to be merged.


  (5)  The index for the PostScript/PDF output does not merge identical page
       numbers, which makes some entries look ugly.


  (6)  None of the indexes (PostScript/PDF and HTML) make use of textual
       markup; the text is all roman, without any italic or boldface.


  (7)  I turned off hyphenation in the PostScript/PDF output, because it was
       being done so badly.


       (a) It seems to force hyphenation if it is at all possible, without
           regard to the "tightness" or "looseness" of the line. Decent
           formatting software should attempt hyphenation only if the line is
           over some "looseness" threshold; otherwise you get far too many
           hyphenations, often for several lines in succession.


       (b) It uses an algorithmic form of hyphenation that doesn't always produce
           acceptable word breaks. (I prefer to use a hyphenation dictionary.)


  (8)  The PostScript/PDF output is badly paginated:

       (a) There seems to be no attempt to avoid "widow" and "orphan" lines on
           pages. A "widow" is the last line of a paragraph at the top of a page,
           and an "orphan" is the first line of a paragraph at the bottom of a
           page.


       (b) There seems to be no attempt to prevent section headings being placed
           last on a page, with no following text on the page.


  (9)  The fop processor does not support "fi" ligatures, not even if you put the
       appropriate Unicode character into the source by hand.


  (10) There are no diagrams in the new documentation. This is something I could
       work on. The previously-used Aspic command for creating line art from a
       textual description can output Encapsulated PostScript or Scalable Vector
       Graphics, which are two standard diagram representations. Aspic could be
       formally released and used to generate output that could be included in at
       least some of the output formats.


The consequence of (7), (8), and (9) is that the PostScript/PDF output looks as
if it comes from some of the very early attempts at text formatting of around
20 years ago. We can only hope that 20 years' progress is not going to get
lost, and that things will improve in this area.


LIST OF FILES

  AdMarkup.txt                   Describes the AsciiDoc markup that is used
  HowItWorks.txt                 This document
  Makefile                       The makefile
  MyAsciidoc.conf                Localized AsciiDoc configuration
  MyStyle-chunk-html.xsl         Stylesheet for chunked HTML output
  MyStyle-filter-fo.xsl          Stylesheet for filter fo output
  MyStyle-fo.xsl                 Stylesheet for any fo output
  MyStyle-html.xsl               Stylesheet for any HTML output
  MyStyle-nochunk-html.xsl       Stylesheet for non-chunked HTML output
  MyStyle-spec-fo.xsl            Stylesheet for spec fo output
  MyStyle-txt-html.xsl           Stylesheet for HTML=>text output
  MyStyle.xsl                    Stylesheet for all output
  MyTitleStyle.xsl               Stylesheet for spec title page
  MyTitlepage.templates.xml      Template for creating MyTitleStyle.xsl
  Myhtml.css                     Experimental css stylesheet for HTML output
  Pre-xml                        Script to preprocess XML
  TidyHTML-filter                Script to tidy up the filter HTML output
  TidyHTML-spec                  Script to tidy up the spec HTML output
  Tidytxt                        Script to compact multiple blank lines
  filter.ascd                    AsciiDoc source of the filter document
  spec.ascd                      AsciiDoc source of the specification document
  x2man                          Script to make the Exim man page from the XML


The file Myhtml.css was an experiment that was not followed through. It is
mentioned in a comment in MyStyle-html.xsl, but is not at present in use.


Philip Hazel
Last updated: 10 June 2005

Index: Makefile
====================================================================
# $Cambridge: exim/exim-doc/doc-docbook/Makefile,v 1.1 2005/06/16 10:32:31 ph10 Exp $

# Make file for Exim documentation from Asciidoc source.

  notarget:;    @echo "** You must specify a target, in the form x.y, where x is 'filter', 'spec',"
            @echo "** or 'test', and y is 'xml', 'fo', 'ps', 'pdf', 'html', 'txt', or 'info'."
            @echo "** One other possible target is 'exim.8'."
            exit 1



############################## MAN PAGE ################################

  exim.8: spec.xml
            ./x2man


########################################################################


############################### FILTER #################################

  filter.xml:   filter.ascd MyAsciidoc.conf
            asciidoc -d book -b docbook -f MyAsciidoc.conf filter.ascd


  filter-fo.xml: filter.xml Pre-xml
            ./Pre-xml -bookinfo <filter.xml >filter-fo.xml


  filter-html.xml: filter.xml Pre-xml
            ./Pre-xml <filter.xml >filter-html.xml


  filter-txt.xml: filter.xml Pre-xml
            ./Pre-xml -ascii <filter.xml >filter-txt.xml


  filter.fo:    filter-fo.xml MyStyle-filter-fo.xsl MyStyle-fo.xsl MyStyle.xsl
            /bin/rm -rf filter.fo filter-fo.fo
            xmlto -x MyStyle-filter-fo.xsl fo filter-fo.xml
            /bin/mv -f filter-fo.fo filter.fo


  filter.ps:    filter.fo
            fop filter.fo -ps filter.ps


  filter.pdf:   filter.fo
            fop filter.fo -pdf filter.pdf


  filter.html:  filter-html.xml TidyHTML-filter MyStyle-nochunk-html.xsl MyStyle-html.xsl MyStyle.xsl
            /bin/rm -rf filter.html filter-html.html
            xmlto -x MyStyle-nochunk-html.xsl html-nochunks filter-html.xml
            /bin/mv -f filter-html.html filter.html
             ./TidyHTML-filter


  filter.txt:   filter-txt.xml Tidytxt MyStyle-txt-html.xsl MyStyle-html.xsl MyStyle.xsl
            /bin/rm -rf filter-txt.html
            xmlto -x MyStyle-txt-html.xsl html-nochunks filter-txt.xml
            w3m -dump filter-txt.html >filter.txt


# I have not found a way of making docbook2texi write its output anywhere
# other than the file name that it makes up. The --to-stdout option does not
# work.

  filter.info:  filter-txt.xml
            docbook2texi filter-txt.xml
            perl -ne 's/conceptindex/cindex/;s/optionindex/findex/;print;' \
          <exim_filtering.texi | Tidytxt >filter.texinfo
            /bin/rm -rf exim_filtering.texi
            makeinfo -o filter.info filter.texinfo


########################################################################


################################ SPEC ##################################

  spec.xml:     spec.ascd MyAsciidoc.conf
            asciidoc -d book -b docbook -f MyAsciidoc.conf spec.ascd


  spec-fo.xml:  spec.xml Pre-xml
            ./Pre-xml <spec.xml >spec-fo.xml


  spec-html.xml: spec.xml Pre-xml
            ./Pre-xml -abstract -oneindex <spec.xml >spec-html.xml


  spec-txt.xml: spec.xml Pre-xml
            ./Pre-xml -abstract -ascii -noindex <spec.xml >spec-txt.xml


  spec.fo:      spec-fo.xml MyStyle-spec-fo.xsl MyStyle-fo.xsl MyStyle.xsl MyTitleStyle.xsl
            /bin/rm -rf spec.fo spec-fo.fo
            xmlto -x MyStyle-spec-fo.xsl fo spec-fo.xml
            /bin/mv -f spec-fo.fo spec.fo


  spec.ps:      spec.fo
            FOP_OPTS=-Xmx512m fop spec.fo -ps spec.ps


  spec.pdf:     spec.fo
            FOP_OPTS=-Xmx512m fop spec.fo -pdf spec.pdf


  spec.html:    spec-html.xml TidyHTML-spec MyStyle-chunk-html.xsl MyStyle-html.xsl MyStyle.xsl
            /bin/rm -rf spec.html
            xmlto -x MyStyle-chunk-html.xsl -o spec.html html spec-html.xml
            ./TidyHTML-spec


  spec.txt:     spec-txt.xml Tidytxt MyStyle-txt-html.xsl MyStyle-html.xsl MyStyle.xsl
            /bin/rm -rf spec-txt.html
            xmlto -x MyStyle-txt-html.xsl html-nochunks spec-txt.xml
            w3m -dump spec-txt.html | Tidytxt >spec.txt


# I have not found a way of making docbook2texi write its output anywhere
# other than the file name that it makes up. The --to-stdout option does not
# work.

  spec.info:    spec-txt.xml
            docbook2texi spec-txt.xml
            perl -ne 's/conceptindex/cindex/;s/optionindex/findex/;print;' \
          <the_exim_mta.texi >spec.texinfo
            /bin/rm -rf the_exim_mta.texi
            makeinfo -o spec.info spec.texinfo


########################################################################


################################ TEST ##################################

# These targets (similar to the above) are for running little tests.

  test.xml:     test.ascd MyAsciidoc.conf
            asciidoc -d book -b docbook -f MyAsciidoc.conf test.ascd


  test-fo.xml:  test.xml Pre-xml
            ./Pre-xml <test.xml >test-fo.xml


  test-html.xml: test.xml Pre-xml
            ./Pre-xml -abstract -oneindex <test.xml >test-html.xml


  test-txt.xml: test.xml Pre-xml
            ./Pre-xml -abstract -ascii -noindex <test.xml >test-txt.xml


  test.fo:      test-fo.xml MyStyle-spec-fo.xsl MyStyle-fo.xsl MyStyle.xsl MyTitleStyle.xsl
            /bin/rm -rf test.fo test-fo.fo
            xmlto -x MyStyle-spec-fo.xsl fo test-fo.xml
            /bin/mv -f test-fo.fo test.fo


  test.ps:      test.fo
            fop test.fo -ps test.ps


  test.pdf:     test.fo
            fop test.fo -pdf test.pdf


  test.html:    test-html.xml MyStyle-nochunk-html.xsl MyStyle-html.xsl MyStyle.xsl
            /bin/rm -rf test.html test-html.html
            xmlto -x MyStyle-nochunk-html.xsl html-nochunks test-html.xml
            /bin/mv -f test-html.html test.html


  test.txt:     test-txt.xml Tidytxt MyStyle-txt-html.xsl MyStyle-html.xsl MyStyle.xsl
            /bin/rm -rf test-txt.html
            xmlto -x MyStyle-txt-html.xsl html-nochunks test-txt.xml
            w3m -dump test-txt.html | Tidytxt >test.txt


# I have not found a way of making docbook2texi write its output anywhere
# other than the file name that it makes up. The --to-stdout option does not
# work.

  test.info:    test-txt.xml
            docbook2texi test-txt.xml
            perl -ne 's/conceptindex/cindex/;s/optionindex/findex/;print;' \
          <short_title.texi >test.texinfo
            /bin/rm -rf short_title.texi
            makeinfo -o test.info test.texinfo


########################################################################


################################ CLEAN #################################

  clean:; /bin/rm -rf exim.8 \
            filter*.xml spec*.xml test*.xml \
            *.fo *.html *.pdf *.ps \
            filter*.txt spec*.txt test*.txt \
            *.info* *.texinfo *.texi


########################################################################

Index: MyAsciidoc.conf
====================================================================
# $Cambridge: exim/exim-doc/doc-docbook/MyAsciidoc.conf,v 1.1 2005/06/16 10:32:31 ph10 Exp $

# Asciidoc configuration customization for creating the DocBook XML sources
# of the Exim specification and the filter document.

[miscellaneous]
newline=\n

[quotes]
_=filename
$=varname
%=option
^=command
^^=function
^%|%^=parameter
``|''=quoted

[tags]
strong=<emphasis role="bold">|</emphasis>

filename=<filename>|</filename>
varname=<varname>$|</varname>
option=<option>|</option>
command=<command>|</command>
function=<function>|</function>
parameter=<parameter>|</parameter>
quoted=<quote>|</quote>


[replacements]
# Nothing - this is for disambiguating markup
"##"=

# -- En dash
(^|[^-])--($|[^-])=\1&#x2013;\2

# --- Em dash
(^|\s+)---($|\s+)=\1&#x2014;\2

# ~ Hard space
~=&#x00a0;

# ' automatic apostrophe
([A-Za-z0-9])'([A-Za-z\s])=\1&#x2019;\2

# daggers
!!=&#x2020;
!\?=&#x2021;

# The default markup recognizes subscripts and superscripts using tilde and
# circumflex. We don't want this. These settings manage to turn off the
# effect, while still allowing tilde to be recognized as a hard space.
\^(.+?)\^=^\1^
~(.+?)~=~\1~


[attributes]
# Manual apostrophe: needed for an apostrophe after something quoted, because
# I can't get the automatic one to work in that situation
ap=&#x2019;

# Manual tilde: tilde is defined as a hard space, and it doesn't seem possible
# to quote it using a backslash.
tl=&#x007e;

# Two hyphens, to stop them being treated as an en dash
hh=&#x002d;&#x002d;

# Percent: causes confusion with the quote otherwise
pc=&#x0025;

# Colon: there's a case where this causes trouble
co=&#x003A;

# The sequence "[]" for use in index terms
bk=&#x005B;&#x005D;


# We need to add extra stuff to the <bookinfo> element

[header]
<?xml {xmldecl}?>
<!DOCTYPE book {dtddecl}>

  <book lang="en">
  {doctitle#}<bookinfo>
      <title>{doctitle}</title>
      <titleabbrev>{doctitleabbrev}</titleabbrev>
      <date>{date}</date>
      {authored#}<author>
          <firstname>{firstname}</firstname>
          <othername>{middlename}</othername>
          <surname>{lastname}</surname>
      {authored#}</author>
      <authorinitials>{authorinitials}</authorinitials>
      {revisionhistory%}<revhistory><revision><revnumber>{revision}</revnumber><date>{date}</date>{authorinitials?<authorinitials>{authorinitials}</authorinitials>}{revremark?<revremark>{revremark}</revremark>}</revision></revhistory>
      <corpname>{companyname}</corpname>
      <othercredit><contrib>{othercredit},</contrib></othercredit>
      {copyright#}<copyright><year>{cpyear}</year><holder>{copyright}</holder></copyright>
      <abstract><para>{abstract}</para></abstract>
  {doctitle#}</bookinfo>



# Define a new kind of block that maps to <literallayout> so as not to
# insist on a monospaced font. Delimiter is &&&.

[blockdef-literallayout]
delimiter=^&{3,}(\[(?P<args>.*)\])?=*$
template=literallayoutblock
presubs=specialcharacters,quotes,replacements,macros,callouts

# The template for my non-monospaced literal layout block

[literallayoutblock]
<literallayout{revisionflag? revisionflag="{revisionflag}"}>
|
</literallayout>


# Paragraph substitution - use <para> rather than <simplepara>

[paragraph]
{title#}<formalpara{id? id="{id}"{revisionflag? revisionflag="{revisionflag}"}}><title>{title}</title><para>
{title%}<para{id? id="{id}"}{revisionflag? revisionflag="{revisionflag}"}>
|
{title%}</para>
{title#}</para></formalpara>
{empty}


# Define a special table for left-centre-right lines, filling the whole page
# width, with a border but no separators, for Exim configuration options. It
# would be nice if this could call the default [table] template, forcing the
# appropriate attributes, but I have not found a way of doing this.

[tabledef-conf]
fillchar==
format=csv
template=conf-table
colspec=<colspec align="{colalign}"/>
bodyrow=<row>|</row>
bodydata=<entry>|</entry>

[conf-table]
<{title?table}{title!informaltable}{id? id="{id}"} pgwide="1" frame="all" colsep="0" rowsep="0">
<title>{title}</title>
<tgroup cols="{cols}">
<colspec align="left" colwidth="8*"/>
<colspec align="center" colwidth="5*"/>
<colspec align="center" colwidth="5*"/>
<colspec align="right" colwidth="6*"/>
{headrows#}<thead>
{headrows}
{headrows#}</thead>
{footrows#}<tfoot>
{footrows}
{footrows#}</tfoot>
<tbody>
{bodyrows}
</tbody>
</tgroup>
</{title?table}{title!informaltable}>

# The default indexterm macro generates primary index entries for the
# secondary and tertiary terms as well, which does not make sense
# in the context of the way I write indexes. As well as a replacement
# that does the simple, straightforward thing, we actually want to have
# two different macros: one for concepts and one for options.

[cindex-inlinemacro]
# Inline index term for concepts.
<indexterm role="concept">
  <primary>{1}</primary>
  <secondary>{2}</secondary>
  <tertiary>{3}</tertiary>
</indexterm>


[oindex-inlinemacro]
# Inline index term for options.
<indexterm role="option">
  <primary>{1}</primary>
  <secondary>{2}</secondary>
  <tertiary>{3}</tertiary>
</indexterm>


# Allow for the "role" attribute for an index.

[sect-index]
<index{id? id="{id}"}{role? role="{role}"}>
<title>{title}</title>
|
</index>


# Allow for the "titleabbrev" attribute for chapters.

[sect1]
<chapter{id? id="{id}"}>
<title>{title}</title>
<titleabbrev>{titleabbrev}</titleabbrev>
|
</chapter>


#### End ####

Index: MyStyle-chunk-html.xsl
====================================================================
<!-- $Cambridge: exim/exim-doc/doc-docbook/MyStyle-chunk-html.xsl,v 1.1 2005/06/16 10:32:31 ph10 Exp $ -->

<!-- This stylesheet driver imports the DocBook XML stylesheet for chunked
HTML output, and then imports my common stylesheet for HTML output. Finally, it
fiddles with the chunking parameters to arrange for chapter chunking only (no
section chunking). -->

<xsl:stylesheet xmlns:xsl="http://www.w3.org/1999/XSL/Transform" version='1.0'>

<xsl:import href="/usr/share/sgml/docbook/xsl-stylesheets-1.66.1/xhtml/chunk.xsl"/>
<xsl:import href="MyStyle-html.xsl"/>


<!-- No section chunking; don't output the list of chunks -->

<xsl:param name="chunk.section.depth" select="0"></xsl:param>
<xsl:param name="chunk.quietly" select="1"/>


</xsl:stylesheet>

Index: MyStyle-filter-fo.xsl
====================================================================
<!-- $Cambridge: exim/exim-doc/doc-docbook/MyStyle-filter-fo.xsl,v 1.1 2005/06/16 10:32:31 ph10 Exp $ -->

<xsl:stylesheet xmlns:xsl="http://www.w3.org/1999/XSL/Transform" version='1.0'>

<!-- This stylesheet driver imports the DocBook XML stylesheet for FO output,
and then imports my common stylesheet that makes changes that are wanted for
all forms of output. Then it imports my FO stylesheet that contains changes for
all printed output. Finally, there are some changes that apply only when
printing the filter document. -->

<xsl:import href="/usr/share/sgml/docbook/xsl-stylesheets-1.66.1/fo/docbook.xsl"/>
<xsl:import href="MyStyle.xsl"/>
<xsl:import href="MyStyle-fo.xsl"/>

<!-- For the filter document, we do not want a title page and verso, as it
isn't really a "book", though we use the book XML style. It turns out that this
can be fiddled simply by changing the text "Table of Contents" to the title of
the document.

However, it seems that we have to repeat here the language-specific changes
that are also present in MyStyle.xsl, because this overrides rather than adds
to the settings. -->

  <xsl:param name="local.l10n.xml" select="document('')"/>
  <l:i18n xmlns:l="http://docbook.sourceforge.net/xmlns/l10n/1.0">
    <l:l10n language="en">


     <l:gentext key="TableofContents" text="Exim&#x2019;s interfaces to mail filtering"/>


      <!-- The default (as modified above) gives us "Chapter xxx" or "Section
      xxx", with a capital letter at the start. So we have to make a more
      complicated explicit change to give just the number. -->


      <l:context name="xref-number">
        <l:template name="chapter" text="%n"/>
        <l:template name="sect1" text="%n"/>
        <l:template name="sect2" text="%n"/>
        <l:template name="section" text="%n"/>
      </l:context>


      <!-- I think that having a trailing dot after section numbers looks fussy,
      whereas you need it after just the digits of a chapter number. In both
      cases we want to get rid of the word "chapter" or "section". -->


      <l:context name="title-numbered">
        <l:template name="chapter" text="%n.&#160;%t"/>
        <l:template name="sect1" text="%n&#160;%t"/>
        <l:template name="sect2" text="%n&#160;%t"/>
        <l:template name="section" text="%n&#160;%t"/>
      </l:context>


    </l:l10n>
  </l:i18n>


</xsl:stylesheet>

Index: MyStyle-fo.xsl
====================================================================
<!-- $Cambridge: exim/exim-doc/doc-docbook/MyStyle-fo.xsl,v 1.1 2005/06/16 10:32:31 ph10 Exp $ -->

  <xsl:stylesheet xmlns:xsl="http://www.w3.org/1999/XSL/Transform"
                  xmlns:fo="http://www.w3.org/1999/XSL/Format"
                  version="1.0">


<!-- This stylesheet driver contains changes that I want to apply to the
printed output form of both the filter document and the main Exim
specification. It is imported by MyStyle-filter-fo.xsl and MyStyle-spec-fo.xsl.
-->

<xsl:import href="MyTitleStyle.xsl"/>



<!-- Set A4 paper, double sided -->

<xsl:param name="paper.type" select="'A4'"></xsl:param>

<!-- This currently causes errors
<xsl:param name="double.sided" select="1"></xsl:param>
-->

<!-- Allow for typed index entries. The "role" setting works with DocBook
version 4.2 or earlier. Later versions (which we are not currently using)
need "type". -->

<xsl:param name="index.on.type" select="1"></xsl:param>
<xsl:param name="index.on.role" select="1"></xsl:param>


<!-- The default uses short chapter titles in the TOC! I want them only for
use in footer lines. So we have to modify this template. I changed
"titleabbrev.markup" to "title.markup". While I'm here, I also made chapter
entries print in bold. -->

  <xsl:template name="toc.line">
    <xsl:variable name="id">
      <xsl:call-template name="object.id"/>
    </xsl:variable>


    <xsl:variable name="label">
      <xsl:apply-templates select="." mode="label.markup"/>
    </xsl:variable>


    <fo:block text-align-last="justify"
              end-indent="{$toc.indent.width}pt"
              last-line-end-indent="-{$toc.indent.width}pt">
      <fo:inline keep-with-next.within-line="always">
        <!-- Added lines for bold -->
        <xsl:choose>
          <xsl:when test="self::chapter">
            <xsl:attribute name="font-weight">bold</xsl:attribute>
          </xsl:when>
          <xsl:when test="self::index">
            <xsl:attribute name="font-weight">bold</xsl:attribute>
          </xsl:when>
        </xsl:choose>
        <!--  ..................  -->
        <fo:basic-link internal-destination="{$id}">
          <xsl:if test="$label != ''">
            <xsl:copy-of select="$label"/>
            <xsl:value-of select="$autotoc.label.separator"/>
          </xsl:if>
          <xsl:apply-templates select="." mode="title.markup"/>
        </fo:basic-link>
      </fo:inline>
      <fo:inline keep-together.within-line="always">
        <xsl:text> </xsl:text>
        <fo:leader leader-pattern="dots"
                   leader-pattern-width="3pt"
                   leader-alignment="reference-area"
                   keep-with-next.within-line="always"/>
        <xsl:text> </xsl:text>
        <fo:basic-link internal-destination="{$id}">
          <fo:page-number-citation ref-id="{$id}"/>
        </fo:basic-link>
      </fo:inline>
    </fo:block>
  </xsl:template>








<!--
Adjust the sizes of the fonts for titles; the defaults are too gross.
-->

<!-- Level 1 is sect1 level -->

  <xsl:attribute-set name="section.title.level1.properties">
    <xsl:attribute name="font-size">
      <xsl:value-of select="$body.font.master * 1.2"></xsl:value-of>
      <xsl:text>pt</xsl:text>
    </xsl:attribute>
  </xsl:attribute-set>



<!-- Fiddling with chapter titles is more messy -->

  <xsl:template match="title" mode="chapter.titlepage.recto.auto.mode">
    <fo:block xmlns:fo="http://www.w3.org/1999/XSL/Format"
              xsl:use-attribute-sets="chapter.titlepage.recto.style"
              margin-left="{$title.margin.left}"
              font-size="17pt"
              font-weight="bold"
              font-family="{$title.font.family}">
      <xsl:call-template name="component.title">
        <xsl:with-param name="node" select="ancestor-or-self::chapter[1]"/>
      </xsl:call-template>
    </fo:block>
  </xsl:template>


  <xsl:template match="title" mode="chapter.titlepage.verso.auto.mode">
    <fo:block xmlns:fo="http://www.w3.org/1999/XSL/Format"
              xsl:use-attribute-sets="chapter.titlepage.recto.style"
              margin-left="{$title.margin.left}"
              font-size="17pt"
              font-weight="bold"
              font-family="{$title.font.family}">
      <xsl:call-template name="component.title">
        <xsl:with-param name="node" select="ancestor-or-self::chapter[1]"/>
      </xsl:call-template>
    </fo:block>
  </xsl:template>



<!-- This provides a hard pagebreak mechanism as a get-out -->

  <xsl:template match="processing-instruction('hard-pagebreak')">
    <fo:block xmlns:fo="http://www.w3.org/1999/XSL/Format" break-before='page'>
    </fo:block>
  </xsl:template>



<!-- Sort out the footer. Useful information is available at
http://www.sagehill.net/docbookxsl/PrintHeaders.html
-->


  <xsl:attribute-set name="footer.content.properties">
    <!-- <xsl:attribute name="font-family">serif</xsl:attribute> -->
    <!-- <xsl:attribute name="font-size">9pt</xsl:attribute> -->
    <xsl:attribute name="font-style">italic</xsl:attribute>
  </xsl:attribute-set>





<!-- Things that can be inserted into the footer are:

<fo:page-number/>
Inserts the current page number.

<xsl:apply-templates select="." mode="title.markup"/>
Inserts the title of the current chapter, appendix, or other component.

<xsl:apply-templates select="." mode="titleabbrev.markup"/>
Inserts the titleabbrev of the current chapter, appendix, or other component,
if it is available. Otherwise it inserts the regular title.

<xsl:apply-templates select="." mode="object.title.markup"/>
Inserts the chapter title with chapter number label. Likewise for appendices.

  <fo:retrieve-marker ... />      Used to retrieve the current section name.


<xsl:apply-templates select="//corpauthor[1]"/>
Inserts the value of the first corpauthor element found anywhere in the
document.

  <xsl:call-template name="datetime.format">
    <xsl:with-param ...
  Inserts a date timestamp.


<xsl:call-template name="draft.text"/>
Inserts the Draft message if draft.mode is currently on.

<fo:external-graphic ... />
Inserts a graphical image.
See the section Graphic in header or footer for details.
-->


  <xsl:template name="footer.content">
    <xsl:param name="pageclass" select="''"/>
    <xsl:param name="sequence" select="''"/>
    <xsl:param name="position" select="''"/>
    <xsl:param name="gentext-key" select="''"/>


    <fo:block>
      <!-- pageclass can be front, body, back -->
      <!-- sequence can be odd, even, first, blank -->
      <!-- position can be left, center, right -->
      <xsl:choose>
        <xsl:when test="$pageclass = 'titlepage'">
          <!-- nop; no footer on title pages -->
        </xsl:when>


        <xsl:when test="$double.sided != 0 and $sequence = 'even'
                        and $position='left'">
          <fo:page-number/>
        </xsl:when>


        <xsl:when test="$double.sided != 0 and ($sequence = 'odd' or $sequence = 'first')
                        and $position='right'">
          <fo:page-number/>
        </xsl:when>


        <xsl:when test="$double.sided = 0 and $position='center'">
          <fo:page-number/>
        </xsl:when>


        <xsl:when test="$double.sided = 0 and $position='right'">
          <xsl:apply-templates select="." mode="titleabbrev.markup"/>
        </xsl:when>


        <xsl:when test="$sequence='blank'">
          <xsl:choose>
            <xsl:when test="$double.sided != 0 and $position = 'left'">
              <fo:page-number/>
            </xsl:when>
            <xsl:when test="$double.sided = 0 and $position = 'center'">
              <fo:page-number/>
            </xsl:when>
            <xsl:otherwise>
              <!-- nop -->
            </xsl:otherwise>
          </xsl:choose>
        </xsl:when>


        <xsl:otherwise>
          <!-- nop -->
        </xsl:otherwise>
      </xsl:choose>
    </fo:block>
  </xsl:template>


</xsl:stylesheet>

Index: MyStyle-html.xsl
====================================================================
<!-- $Cambridge: exim/exim-doc/doc-docbook/MyStyle-html.xsl,v 1.1 2005/06/16 10:32:31 ph10 Exp $ -->

<xsl:stylesheet xmlns:xsl="http://www.w3.org/1999/XSL/Transform" version='1.0'>

<!-- This stylesheet driver imports my common stylesheet that makes some
changes that are wanted for all forms of output. Then it makes changes that are
specific to HTML output. -->

<xsl:import href="MyStyle.xsl"/>

<xsl:param name="shade.verbatim" select="1"></xsl:param>

  <xsl:attribute-set name="shade.verbatim.style">
    <xsl:attribute name="bgcolor">#F0F0E0</xsl:attribute>
    <xsl:attribute name="width">100%</xsl:attribute>
    <xsl:attribute name="cellpadding">2</xsl:attribute>
    <xsl:attribute name="border">0</xsl:attribute>
  </xsl:attribute-set>


<!-- This is how you can make use of a CSS stylesheet, but at present I'm
not doing so. -->

<!--
<xsl:param name="html.stylesheet" select="'Myhtml.css'"/>
-->


<!-- This removes the title of the current page from the top of the page -
redundant because each page is a chapter, whose title shows just below. It also
removes the titles of the next/prev at the bottom of the page, but I don't
think that matters too much. -->

<xsl:param name="navig.showtitles" select="'0'"/>


<!-- This allows for the setting of RevisionFlag on elements. -->

<xsl:param name="show.revisionflag" select="'1'"/>

  <xsl:template name="system.head.content">
  <style type="text/css">
  <xsl:text>
  div.added    { background-color: #ffff99; }
  div.deleted  { text-decoration: line-through;
                 background-color: #FF7F7F; }
  div.changed  { background-color: #99ff99; }
  div.off      {  }


  span.added   { background-color: #ffff99; }
  span.deleted { text-decoration: line-through;
                 background-color: #FF7F7F; }
  span.changed { background-color: #99ff99; }
  span.off     {  }
  </xsl:text>
  </style>
  </xsl:template>


  <xsl:template match="*[@revisionflag]">
    <xsl:choose>
      <xsl:when test="local-name(.) = 'para' or local-name(.) = 'simpara'
                    or local-name(.) = 'formalpara'
                    or local-name(.) = 'section'
                    or local-name(.) = 'sect1'
                    or local-name(.) = 'sect2'
                    or local-name(.) = 'sect3'
                    or local-name(.) = 'sect4'
                    or local-name(.) = 'sect5'
                    or local-name(.) = 'chapter'
                    or local-name(.) = 'preface'
                    or local-name(.) = 'itemizedlist'
                    or local-name(.) = 'varlistentry'
                    or local-name(.) = 'glossary'
                    or local-name(.) = 'bibliography'
                    or local-name(.) = 'index'
                    or local-name(.) = 'appendix'">
        <div class="{@revisionflag}">
          <xsl:apply-imports/>
        </div>
      </xsl:when>
      <xsl:when test="local-name(.) = 'phrase' or local-name(.) = 'ulink'
                    or local-name(.) = 'link'
                    or local-name(.) = 'filename'
                    or local-name(.) = 'literal'
                    or local-name(.) = 'member'
                    or local-name(.) = 'glossterm'
                    or local-name(.) = 'sgmltag'
                    or local-name(.) = 'quote'
                    or local-name(.) = 'emphasis'
                    or local-name(.) = 'command'
                    or local-name(.) = 'xref'">
        <span class="{@revisionflag}">
          <xsl:apply-imports/>
        </span>
      </xsl:when>
      <xsl:when test="local-name(.) = 'listitem' or local-name(.) = 'entry'
                    or local-name(.) = 'title'">
        <!-- nop; these are handled directly in the stylesheet -->
        <xsl:apply-imports/>
      </xsl:when>
      <xsl:otherwise>
        <xsl:message>
          <xsl:text>Revisionflag on unexpected element: </xsl:text>
          <xsl:value-of select="local-name(.)"/>
          <xsl:text> (Assuming block)</xsl:text>
        </xsl:message>
        <div class="{@revisionflag}">
          <xsl:apply-imports/>
        </div>
      </xsl:otherwise>
    </xsl:choose>
  </xsl:template>
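
<!-- Illustrative example (the sample text is invented, not from the Exim
sources): a DocBook element such as

  <para revisionflag="changed">New wording.</para>

is wrapped by the template above in <div class="changed">, so the CSS rules
inserted by system.head.content highlight it with a green background. -->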



<!-- The default uses short chapter titles in the TOC! I want them only for
use in footer lines in printed output. So we have to modify this template. I
changed "titleabbrev.markup" to "title.markup". -->

  <xsl:template name="toc.line">
    <xsl:param name="toc-context" select="."/>
    <xsl:param name="depth" select="1"/>
    <xsl:param name="depth.from.context" select="8"/>


   <span>
    <xsl:attribute name="class"><xsl:value-of select="local-name(.)"/></xsl:attribute>
    <a>
      <xsl:attribute name="href">
        <xsl:call-template name="href.target">
          <xsl:with-param name="context" select="$toc-context"/>
        </xsl:call-template>
      </xsl:attribute>


      <xsl:variable name="label">
        <xsl:apply-templates select="." mode="label.markup"/>
      </xsl:variable>
      <xsl:copy-of select="$label"/>
      <xsl:if test="$label != ''">
        <xsl:value-of select="$autotoc.label.separator"/>
      </xsl:if>


      <xsl:apply-templates select="." mode="title.markup"/>
    </a>
    </span>
  </xsl:template>



<!-- The default stylesheets generate both chapters and sections with <h2>
headings in the HTML. The argument is that the HTML headings don't go deep
enough to match the DocBook levels. But surely it would be better to stop them
at the bottom end? Anyway, the Exim documents have only one level of section
within chapters, and even if they went to two, it wouldn't exhaust HTML's
capabilities. So I have copied the style stuff here, making a 1-character
change from "+ 1" to "+ 2" in roughly the middle. -->

  <xsl:template name="section.heading">
    <xsl:param name="section" select="."/>
    <xsl:param name="level" select="1"/>
    <xsl:param name="allow-anchors" select="1"/>
    <xsl:param name="title"/>
    <xsl:param name="class" select="'title'"/>


    <xsl:variable name="id">
      <xsl:choose>
        <!-- if title is in an *info wrapper, get the grandparent -->
        <xsl:when test="contains(local-name(..), 'info')">
          <xsl:call-template name="object.id">
            <xsl:with-param name="object" select="../.."/>
          </xsl:call-template>
        </xsl:when>
        <xsl:otherwise>
          <xsl:call-template name="object.id">
            <xsl:with-param name="object" select=".."/>
          </xsl:call-template>
        </xsl:otherwise>
      </xsl:choose>
    </xsl:variable>


    <!-- HTML H level is two higher than section level -->
    <xsl:variable name="hlevel" select="$level + 2"/>
    <xsl:element name="h{$hlevel}">
      <xsl:attribute name="class"><xsl:value-of select="$class"/></xsl:attribute>
      <xsl:if test="$css.decoration != '0'">
        <xsl:if test="$hlevel&lt;3">
          <xsl:attribute name="style">clear: both</xsl:attribute>
        </xsl:if>
      </xsl:if>
      <xsl:if test="$allow-anchors != 0">
        <xsl:call-template name="anchor">
          <xsl:with-param name="node" select="$section"/>
          <xsl:with-param name="conditional" select="0"/>
        </xsl:call-template>
      </xsl:if>
      <xsl:copy-of select="$title"/>
    </xsl:element>
  </xsl:template>



</xsl:stylesheet>

Index: MyStyle-nochunk-html.xsl
====================================================================
<!-- $Cambridge: exim/exim-doc/doc-docbook/MyStyle-nochunk-html.xsl,v 1.1 2005/06/16 10:32:31 ph10 Exp $ -->

<xsl:stylesheet xmlns:xsl="http://www.w3.org/1999/XSL/Transform" version='1.0'>

<!-- This stylesheet driver imports the DocBook XML stylesheet for unchunked
HTML output, and then imports my common stylesheet for HTML output. -->

<xsl:import href="/usr/share/sgml/docbook/xsl-stylesheets-1.66.1/xhtml/docbook.xsl"/>
<xsl:import href="MyStyle-html.xsl"/>

</xsl:stylesheet>

Index: MyStyle-spec-fo.xsl
====================================================================
<!-- $Cambridge: exim/exim-doc/doc-docbook/MyStyle-spec-fo.xsl,v 1.1 2005/06/16 10:32:31 ph10 Exp $ -->

<xsl:stylesheet xmlns:xsl="http://www.w3.org/1999/XSL/Transform" version='1.0'>

<!-- This stylesheet driver imports the DocBook XML stylesheet for FO output,
and then imports my common stylesheet that makes changes that are wanted for
all forms of output. Then it imports my FO stylesheet that contains changes for
all printed output. Finally, there are some changes that apply only when
printing the Exim specification document. -->

<xsl:import href="/usr/share/sgml/docbook/xsl-stylesheets-1.66.1/fo/docbook.xsl"/>
<xsl:import href="MyStyle.xsl"/>
<xsl:import href="MyStyle-fo.xsl"/>

<!-- Nothing special for the full spec document yet -->

</xsl:stylesheet>

Index: MyStyle-txt-html.xsl
====================================================================
<!-- $Cambridge: exim/exim-doc/doc-docbook/MyStyle-txt-html.xsl,v 1.1 2005/06/16 10:32:31 ph10 Exp $ -->

<xsl:stylesheet xmlns:xsl="http://www.w3.org/1999/XSL/Transform" version='1.0'>

<!-- This stylesheet driver imports the DocBook XML stylesheet for unchunked
HTML output, and then imports my common stylesheet for HTML output. Then it
adds an instruction to use "(c)" for copyright rather than the Unicode
character. -->

<xsl:import href="/usr/share/sgml/docbook/xsl-stylesheets-1.66.1/xhtml/docbook.xsl"/>
<xsl:import href="MyStyle-html.xsl"/>

  <xsl:template name="dingbat.characters">
    <xsl:param name="dingbat">bullet</xsl:param>
    <xsl:choose>
      <xsl:when test="$dingbat='copyright'">(c)</xsl:when>
      <xsl:otherwise>
        <xsl:text>?</xsl:text>
      </xsl:otherwise>
    </xsl:choose>
  </xsl:template>


</xsl:stylesheet>

Index: MyStyle.xsl
====================================================================
<!-- $Cambridge: exim/exim-doc/doc-docbook/MyStyle.xsl,v 1.1 2005/06/16 10:32:31 ph10 Exp $ -->

<xsl:stylesheet xmlns:xsl="http://www.w3.org/1999/XSL/Transform" version='1.0'>

<!-- This file contains changes to the Docbook XML stylesheets that I want to
have happen in all forms of output. It is imported by all the drivers. -->


<!-- Set body font size -->

<xsl:param name="body.font.master">11</xsl:param>

<!-- Set no relative indent for titles and body -->

<xsl:param name="title.margin.left">0pt</xsl:param>


<!-- This removes the dot at the end of run-in titles, which we use
for formal paragraphs for command line options. -->

<xsl:param name="runinhead.default.title.end.punct" select="' '"></xsl:param>


<!-- Without this setting, variable lists get misformatted in the FO case,
causing overprinting. Maybe with a later release of fop the need to do this
might go away. -->

<xsl:param name="variablelist.as.blocks" select="1"></xsl:param>


<!--
Cause sections to be numbered, and to include the outer component number.
-->

<xsl:param name="section.autolabel">1</xsl:param>
<xsl:param name="section.label.includes.component.label">1</xsl:param>


<!--
Specify TOCs only for top-level things. No TOCs for components (e.g. chapters)
-->

  <xsl:param name="generate.toc">
  article   toc,title
  book      toc,title
  </xsl:param>



<!-- Turn off the poor hyphenation -->

<xsl:param name="hyphenate">false</xsl:param>


<!--
Generate only numbers, no titles, in cross references.
-->

<xsl:param name="xref.with.number.and.title">0</xsl:param>


<!-- Hopefully this might do something useful? It doesn't seem to. -->

<xsl:param name="fop.extensions" select="1"></xsl:param>


<!-- Output variable names in italic rather than the default monospace. -->

  <xsl:template match="varname">
    <xsl:call-template name="inline.italicseq"/>
  </xsl:template>



<!-- Output file names in italic rather than the default monospace. -->

  <xsl:template match="filename">
    <xsl:call-template name="inline.italicseq"/>
  </xsl:template>



<!-- Output options in bold rather than the default monospace. -->

  <xsl:template match="option">
    <xsl:call-template name="inline.boldseq"/>
  </xsl:template>



<!--
Make a number of more detailed changes to the style that involve more than just
fiddling with a parameter.
-->

  <xsl:param name="local.l10n.xml" select="document('')"/>
  <l:i18n xmlns:l="http://docbook.sourceforge.net/xmlns/l10n/1.0">
    <l:l10n language="en">


      <!-- The default (as modified above) gives us "Chapter xxx" or "Section
      xxx", with a capital letter at the start. So we have to make a more
      complicated explicit change to give just the number. -->


      <l:context name="xref-number">
        <l:template name="chapter" text="%n"/>
        <l:template name="sect1" text="%n"/>
        <l:template name="sect2" text="%n"/>
        <l:template name="section" text="%n"/>
      </l:context>


      <!-- I think that having a trailing dot after section numbers looks fussy,
      whereas you need it after just the digits of a chapter number. In both
      cases we want to get rid of the word "chapter" or "section". -->


      <l:context name="title-numbered">
        <l:template name="chapter" text="%n.&#160;%t"/>
        <l:template name="sect1" text="%n&#160;%t"/>
        <l:template name="sect2" text="%n&#160;%t"/>
        <l:template name="section" text="%n&#160;%t"/>
      </l:context>
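
      <!-- Illustrative example (chapter title invented): with these
      templates, a chapter numbered 5 and titled "Routers" is labelled
      "5. Routers", whereas a section numbered 5.2 gets just "5.2" before
      its title, with no trailing dot. -->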


    </l:l10n>
  </l:i18n>



<!-- The default has far too much space on either side of displays and lists -->

  <xsl:attribute-set name="verbatim.properties">
    <xsl:attribute name="space-before.minimum">0em</xsl:attribute>
    <xsl:attribute name="space-before.optimum">0em</xsl:attribute>
    <xsl:attribute name="space-before.maximum">0em</xsl:attribute>
    <xsl:attribute name="space-after.minimum">0em</xsl:attribute>
    <xsl:attribute name="space-after.optimum">0em</xsl:attribute>
    <xsl:attribute name="space-after.maximum">0em</xsl:attribute>
    <xsl:attribute name="start-indent">0.3in</xsl:attribute>
  </xsl:attribute-set>


  <xsl:attribute-set name="list.block.spacing">
    <xsl:attribute name="space-before.optimum">0em</xsl:attribute>
    <xsl:attribute name="space-before.minimum">0em</xsl:attribute>
    <xsl:attribute name="space-before.maximum">0em</xsl:attribute>
    <xsl:attribute name="space-after.optimum">0em</xsl:attribute>
    <xsl:attribute name="space-after.minimum">0em</xsl:attribute>
    <xsl:attribute name="space-after.maximum">0em</xsl:attribute>
  </xsl:attribute-set>


<!-- List item spacing -->

  <xsl:attribute-set name="list.item.spacing">
    <xsl:attribute name="space-before.optimum">0.8em</xsl:attribute>
    <xsl:attribute name="space-before.minimum">0.8em</xsl:attribute>
    <xsl:attribute name="space-before.maximum">1em</xsl:attribute>
  </xsl:attribute-set>


<!-- Reduce the space after informal tables -->

  <xsl:attribute-set name="informal.object.properties">
    <xsl:attribute name="space-before.minimum">1em</xsl:attribute>
    <xsl:attribute name="space-before.optimum">1em</xsl:attribute>
    <xsl:attribute name="space-before.maximum">2em</xsl:attribute>
    <xsl:attribute name="space-after.minimum">0em</xsl:attribute>
    <xsl:attribute name="space-after.optimum">0em</xsl:attribute>
    <xsl:attribute name="space-after.maximum">0em</xsl:attribute>
  </xsl:attribute-set>


<!-- Reduce the space after section titles. 0 is not small enough. -->

  <xsl:attribute-set name="section.title.level1.properties">
    <xsl:attribute name="space-after.minimum">-6pt</xsl:attribute>
    <xsl:attribute name="space-after.optimum">-4pt</xsl:attribute>
    <xsl:attribute name="space-after.maximum">0pt</xsl:attribute>
  </xsl:attribute-set>


<!-- Slightly reduce the space before paragraphs -->

  <xsl:attribute-set name="normal.para.spacing">
    <xsl:attribute name="space-before.optimum">0.8em</xsl:attribute>
    <xsl:attribute name="space-before.minimum">0.8em</xsl:attribute>
    <xsl:attribute name="space-before.maximum">1.0em</xsl:attribute>
  </xsl:attribute-set>



  <xsl:attribute-set name="table.cell.padding">
    <xsl:attribute name="padding-left">2pt</xsl:attribute>
    <xsl:attribute name="padding-right">2pt</xsl:attribute>
    <xsl:attribute name="padding-top">0pt</xsl:attribute>
    <xsl:attribute name="padding-bottom">0pt</xsl:attribute>
  </xsl:attribute-set>




<!-- Turn off page header rule -->
<xsl:param name="header.rule" select="0"></xsl:param>

<!-- Remove page header content -->
<xsl:template name="header.content"/>

<!-- Remove space for page header -->
<xsl:param name="body.margin.top" select="'0in'"></xsl:param>
<xsl:param name="region.before.extent" select="'0in'"></xsl:param>

<!-- Turn off page footer rule -->
<xsl:param name="footer.rule" select="0"></xsl:param>


</xsl:stylesheet>

Index: MyTitlepage.templates.xml
====================================================================
<!DOCTYPE t:templates [
<!ENTITY hsize0 "10pt">
<!ENTITY hsize1 "12pt">
<!ENTITY hsize2 "14.4pt">
<!ENTITY hsize3 "17.28pt">
<!ENTITY hsize4 "20.736pt">
<!ENTITY hsize5 "24.8832pt">
<!ENTITY hsize0space "7.5pt"> <!-- 0.75 * hsize0 -->
<!ENTITY hsize1space "9pt"> <!-- 0.75 * hsize1 -->
<!ENTITY hsize2space "10.8pt"> <!-- 0.75 * hsize2 -->
<!ENTITY hsize3space "12.96pt"> <!-- 0.75 * hsize3 -->
<!ENTITY hsize4space "15.552pt"> <!-- 0.75 * hsize4 -->
<!ENTITY hsize5space "18.6624pt"> <!-- 0.75 * hsize5 -->
]>

<!-- $Cambridge: exim/exim-doc/doc-docbook/MyTitlepage.templates.xml,v 1.1 2005/06/16 10:32:31 ph10 Exp $ -->

<!-- This document is copied from the DocBook XSL stylesheets, and modified to
do what I want it to do for the Exim reference manual. Process this document
with:

  xsltproc -output MyTitleStyle.xsl \
    /usr/share/sgml/docbook/xsl-stylesheets-1.66.1/template/titlepage.xsl \
    MyTitlepage.templates.xml


in order to generate a style sheet called MyTitleStyle.xsl. That is then
included in my customization stylesheet. What a lot of heavyweight apparatus we
need to set up! -->


  <t:templates xmlns:t="http://nwalsh.com/docbook/xsl/template/1.0"
               xmlns:param="http://nwalsh.com/docbook/xsl/template/1.0/param"
               xmlns:fo="http://www.w3.org/1999/XSL/Format"
               xmlns:xsl="http://www.w3.org/1999/XSL/Transform">


  <!-- ********************************************************************
       $Id: titlepage.templates.xml,v 1.23 2003/12/16 00:30:49 bobstayton Exp $
       ********************************************************************


       This file is part of the DocBook XSL Stylesheet distribution.
       See ../README or http://docbook.sf.net/ for copyright
       and other information.


       ******************************************************************** -->


<!-- ==================================================================== -->

    <t:titlepage t:element="book" t:wrapper="fo:block">
      <t:titlepage-content t:side="recto">
        <title
               t:named-template="division.title"
               param:node="ancestor-or-self::book[1]"
               text-align="center"
               font-size="&hsize5;"
               space-before="&hsize5space;"
               font-weight="bold"
               font-family="{$title.fontset}"/>
        <subtitle
                  text-align="center"
                  font-size="&hsize4;"
                  space-before="&hsize4space;"
                  font-family="{$title.fontset}"/>
        <corpauthor font-size="&hsize3;"
                    keep-with-next="always"
                    space-before="2in"/>
        <authorgroup space-before="2in"/>
        <author font-size="&hsize3;"
                space-before="&hsize2space;"
                keep-with-next="always"/>
      </t:titlepage-content>


    <t:titlepage-content t:side="verso">
        <title
               t:named-template="book.verso.title"
               font-size="&hsize2;"
               font-weight="bold"
               font-family="{$title.fontset}"/>
        <corpauthor/>
        <authorgroup t:named-template="verso.authorgroup"/>
        <author/>
        <othercredit/>
        <pubdate space-before="1em"/>
        <abstract/>
        <copyright/>
        <legalnotice font-size="8pt"/>
    </t:titlepage-content>


  <!-- This change stops it putting a blank page after the verso -->
    <t:titlepage-separator>
  <!--      <fo:block break-after="page"/> -->
    </t:titlepage-separator>


    <t:titlepage-before t:side="recto">
    </t:titlepage-before>


    <t:titlepage-before t:side="verso">
        <fo:block break-after="page"/>
    </t:titlepage-before>
  </t:titlepage>


</t:templates>

Index: Myhtml.css
====================================================================
# $Cambridge: exim/exim-doc/doc-docbook/Myhtml.css,v 1.1 2005/06/16 10:32:31 ph10 Exp $

  .screen {
          font-family: monospace;
          font-size: 1em;
          display: block;
          padding: 10px;
          border: 1px solid #bbb;
          background-color: #eee;
          color: #000;
          overflow: auto;
          border-radius: 2.5px;
          -moz-border-radius: 2.5px;
          margin: 0.5em 2em;
  }

  .programlisting {
          font-family: monospace;
          font-size: 1em;
          display: block;
          padding: 10px;
          border: 1px solid #bbb;
          background-color: #ddd;
          color: #000;
          overflow: auto;
          border-radius: 2.5px;
          -moz-border-radius: 2.5px;
          margin: 0.5em 2em;
  }



Index: Pre-xml
====================================================================
#! /usr/bin/perl

# $Cambridge: exim/exim-doc/doc-docbook/Pre-xml,v 1.1 2005/06/16 10:32:31 ph10 Exp $

# Script to pre-process XML input before processing it for various purposes.
# Options specify which transformations are to be done. Monospaced literal
# layout blocks are never touched.

# Changes:

# -abstract: Remove the <abstract> element

  # -ascii:    Replace &#8230;  (sic, no x) with ...
  #            Replace &#x2019; by '
  #            Replace &#x201C; by "
  #            Replace &#x201D; by "
  #            Replace &#x2013; by -
  #            Replace &#x2020; by *
  #            Replace &#x2021; by **
  #            Replace &#x00a0; by a space
  #            Replace &#x00a9; by (c)
  #            Put quotes round <literal> text
  #            Put quotes round <quote> text


# -bookinfo: Remove the <bookinfo> element from the file

  # -fi:       Replace "fi" by &#xFB01; except when it is in an XML element, or
  #            inside a <literal>.


# -noindex:  Remove the XML to generate a Concept and an Options index.
# -oneindex: Ditto, but add XML to generate a single index.
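
# Typical invocation (illustrative; the file names are invented):
#
#   ./Pre-xml -ascii -oneindex <spec.xml >spec-ascii.xml
#
# This reads XML on stdin, flattens the typographic entities listed above to
# ASCII, replaces the separate indexes with a single one, and writes the
# result to stdout.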



# The function that processes text that is not inside <literal> or in a
# monospaced literallayout block

sub process()
{
my($s) = $_[0];

$s =~ s/fi(?![^<>]*>)/&#xFB01;/g if $ligatures;

  if ($ascii)
    {
    $s =~ s/&#8230;/.../g;
    $s =~ s/&#x2019;/'/g;
    $s =~ s/&#x201C;/"/g;
    $s =~ s/&#x201D;/"/g;
    $s =~ s/&#x2013;/-/g;
    $s =~ s/&#x2020;/*/g;
    $s =~ s/&#x2021;/**/g;
    $s =~ s/&#x00a0;/ /g;
    $s =~ s/&#x00a9;/(c)/g;
    $s =~ s/<quote>/"/g;
    $s =~ s/<\/quote>/"/g;
    }


$s;
}


# The main program

  $abstract  = 0;
  $ascii     = 0;
  $bookinfo  = 0;
  $inliteral = 0;
  $ligatures = 0;
  $madeindex = 0;
  $noindex   = 0;
  $oneindex  = 0;


  foreach $arg (@ARGV)
    {
    if    ($arg eq "-fi")       { $ligatures = 1; }
    elsif ($arg eq "-abstract") { $abstract = 1; }
    elsif ($arg eq "-ascii")    { $ascii = 1; }
    elsif ($arg eq "-bookinfo") { $bookinfo = 1; }
    elsif ($arg eq "-noindex")  { $noindex = 1; }
    elsif ($arg eq "-oneindex") { $oneindex = 1; }
    else  { die "** Pre-xml: Unknown option \"$arg\"\n"; }
    }


  while (<STDIN>)
    {
    # Remove <abstract> if required


    next if ($abstract && /^\s*<abstract>/);


    # Remove <bookinfo> if required


    if ($bookinfo && /^<bookinfo/)
      {
      while (<STDIN>) { last if /^<\/bookinfo/; }
      next;
      }


    # Copy monospaced literallayout blocks


    if (/^<literallayout class="monospaced">/)
      {
      print;
      while (<STDIN>)
        {
        print;
        last if /^<\/literallayout>/;
        }
      next;
      }


    # Adjust index-generation code if required


    if (($noindex || $oneindex) && /^<index[\s>]/)
      {
      while (<STDIN>)
        {
        last if /^<\/index>/;
        }


      if ($oneindex && !$madeindex)
        {
        $madeindex = 1;
        print "<index><title>Index</title></index>\n";
        }


      next;
      }


    # A line that is not in a monospaced literal block; keep track of which
    # parts are in <literal> and which not. The latter get processed by the
    # function above.


    for (;;)
      {
      if ($inliteral)
        {
        if (/^(.*?)<\/literal>(.*)$/)
          {
          print $1;
          print "\"" if $ascii;
          print "</literal>";
          $inliteral = 0;
          $_ = "$2\n";
          }
        else
          {
          print;
          last;
          }
        }


      # Not in literal state


      else
        {
        if (/^(.*?)<literal>(.*)$/)
          {
          print &process($1);
          print "<literal>";
          print "\"" if $ascii;
          $inliteral = 1;
          $_ = "$2\n";
          }
        else
          {
          print &process($_);
          last;
          }
        }
      }    # Loop for different parts of one line
    }      # Loop for multiple lines


# End

Index: TidyHTML-filter
====================================================================
#! /usr/bin/perl

# $Cambridge: exim/exim-doc/doc-docbook/TidyHTML-filter,v 1.1 2005/06/16 10:32:31 ph10 Exp $

# Script to tidy up the filter HTML file that is generated by xmlto. The
# following changes are made:
#
# 1. Split very long lines.
# 2. Create reverse links from chapter and section titles back to the TOC.


$tocref = 1;

# Read in the filter.html file.

open(IN, "filter.html") || die "Failed to open filter.html for reading: $!\n";
@text = <IN>;
close(IN);

# Insert a newline after every > because the whole toc is generated as one
# humungous line that is hard to check. Then split the lines so that each one
# is a separate element in the vector.

  foreach $line (@text) { $line =~ s/>\s*/>\n/g; }
  for ($i = 0; $i < scalar(@text); $i++)
    { splice @text, $i, 1, (split /(?<=\n)/, $text[$i]); }
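
# Illustrative example: an element such as "<div>\n<a>\n" left by the
# substitution above is spliced into the two elements "<div>\n" and "<a>\n",
# so that each element of @text holds exactly one line.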


# We want to create reverse links from each chapter and section title back to
# the relevant place in the TOC. Scan the TOC for the relevant entries. Add
# an id to each entry, and create tables that remember the new link ids. We
# detect the start of the TOC by <div class="toc" and the end of the TOC by
# <div class="chapter".

# Skip to start of TOC

  for ($i = 0; $i < scalar(@text); $i++)
    {
    last if $text[$i] =~ /^<div class="toc"/;
    }


# Scan the TOC

  for (; $i < scalar(@text); $i++)
    {
    last if $text[$i] =~ /^<div class="chapter"/;
    if ($text[$i] =~ /^<a href="(#[^"]+)">/)
      {
      my($ss) = $1;
      my($id) = sprintf "%04d", $tocref++;
      $text[$i] =~ s/<a/<a id="toc$id"/;
      $backref{"$ss"} = "toc$id";
      }
    }


# Scan remainder of the document

  for (; $i < scalar(@text); $i++)
    {
    if ($text[$i] =~ /^<h[23] /)
      {
      $i++;
      if ($text[$i] =~ /^<a( xmlns="[^"]+")? id="([^"]+)">$/)
        {
        my($ref) = $backref{"#$2"};
        $text[$i++] = "<a$1 href=\"#$ref\" id=\"$2\">\n";
        my($temp) = $text[$i];
        $text[$i] = $text[$i+1];
        $text[++$i] = $temp;
        }
      }
    }


# Write out the revised file

open(OUT, ">filter.html") || die "Failed to open filter.html for writing: $!\n";
print OUT @text;
close(OUT);

# End

Index: TidyHTML-spec
====================================================================
#! /usr/bin/perl

# $Cambridge: exim/exim-doc/doc-docbook/TidyHTML-spec,v 1.1 2005/06/16 10:32:31 ph10 Exp $

# Script to tidy up the spec HTML files that are generated by xmlto. The
# following changes are made:
#
# 1. Tidy the index.html file by splitting the very long lines.
# 2. Create reverse links from chapter and section titles back to the TOC.
# 3. Tidy the ix01.html file - the actual index - by splitting long lines.
# 4. Insert links from the letter divisions to the top of the Index.

chdir "spec.html";

$tocref = 1;

# Read in the index.html file. It's really the TOC.

open(IN, "index.html") || die "Failed to open index.html for reading: $!\n";
@toc = <IN>;
close(IN);

# Insert a newline after every > because the whole toc is generated as one
# humungous line that is hard to check. Then split the lines so that each one
# is a separate element in the vector.

  foreach $line (@toc) { $line =~ s/>\s*/>\n/g; }
  for ($i = 0; $i < scalar(@toc); $i++)
    { splice @toc, $i, 1, (split /(?<=\n)/, $toc[$i]); }


# We want to create reverse links from each chapter and section title back to
# the relevant place in the TOC. Scan the TOC for the relevant entries. Add
# an id to each entry, and create tables that remember the file names and the
# new link ids.

  foreach $line (@toc)
    {
    if ($line =~ /^<a href="((?:ch|ix)\d+\.html)(#[^"]+)?">/)
      {
      my($chix) = $1;
      my($ss) = $2;
      my($id) = sprintf "%04d", $tocref++;
      $line =~ s/<a/<a id="toc$id"/;
      $backref{"$chix$ss"} = "toc$id";
      push @chlist, $chix;
      }
    }


# Write out the modified index.html file.

open (OUT, ">index.html") || die "Failed to open index.html for writing: $!\n";
print OUT @toc;
close(OUT);

# Now scan each of the other page files and insert the reverse links.

  foreach $file (@chlist)
    {
    open(IN, "$file") || die "Failed to open $file for reading: $!\n";
    @text = <IN>;
    close(IN);


    foreach $line (@text)
      {
      if ($line =~ /^(.*?)<a( xmlns="[^"]+")? id="([^"]+)"><\/a>(.+?)<\/h(.*)$/)
        {
        my($pre, $opt, $id, $title, $post) = ($1, $2, $3, $4, $5);


        # Section reference
        my($ref) = $backref{"$file#$id"};


        # If not found, try for a chapter reference
        $ref = $backref{"$file"} if !defined $ref;


        # Adjust the line
        $line = "$pre<a$opt href=\"index.html#$ref\" id=\"$id\">$title</a></h$post";
        }
      }


    open(OUT, ">$file") || die "Failed to open $file for writing: $!\n";
    print OUT @text;
    close(OUT);
    }


# Now process the ix01.html file

open(IN, "ix01.html") || die "Failed to open ix01.html for reading: $!\n";
@index = <IN>;
close(IN);

# Insert a newline after every > because the whole index is generated as one
# humungous line that is hard to check. Then split the lines so that each one
# is a separate element in the vector.

  foreach $line (@index) { $line =~ s/>\s*/>\n/g; }
  for ($i = 0; $i < scalar(@index); $i++)
    { splice @index, $i, 1, (split /(?<=\n)/, $index[$i]); }


# We want to add a list of letters at the top of the index, and link back
# to them from each letter heading. First find the index title and remember
# where to insert the list of letters.

  for ($i = 0; $i < scalar(@index); $i++)
    {
    if ($index[$i] =~ /^<\/h2>$/)
      {
      $listindex = $i;
      last;
      }
    }


# Now scan through for the letter headings and build the cross references,
# while also building up the list to insert.

  $list = "<h4>\n";
  for (; $i < scalar(@index); $i++)
    {
    if ($index[$i] =~ /^(.)<\/h3>$/)
      {
      $letter = $1;
      $index[$i-1] =~ s/^/<a id="${letter}B" href="#${letter}T">/;
      $index[$i] =~ s/$/<\/a>/;
      $list .= "<a id=\"${letter}T\" href=\"#${letter}B\"> $letter</a>\n";
      }
    }


# Now we know which letters we have, we can insert the list.

$list .= "</h4>\n";
splice @index, $listindex, 0, $list;

# Write out the modified index.html file.

open (OUT, ">ix01.html") || die "Failed to open ix01.html for writing: $!\n";
print OUT @index;
close(OUT);


# End

Index: Tidytxt
====================================================================
#! /usr/bin/perl

# $Cambridge: exim/exim-doc/doc-docbook/Tidytxt,v 1.1 2005/06/16 10:32:31 ph10 Exp $

# Script to tidy up the output of w3m when it makes a text file. We convert
# sequences of blank lines into a single blank line.

  $blanks = 0;
  while (<>)
    {
    if (/^\s*$/)
      {
      $blanks++;
      next;
      }
    print "\n" if $blanks > 0;
    $blanks = 0;
    print;
    }


# End

Index: filter.ascd
====================================================================
///
$Cambridge: exim/exim-doc/doc-docbook/filter.ascd,v 1.1 2005/06/16 10:32:31 ph10 Exp $

This file contains the AsciiDoc source for the document that describes Exim's
filtering facilities from a user's point of view. See the file AdMarkup.txt for
an explanation of the markup that is used. It is more or less standard
AsciiDoc, but with a few changes and additions.
///


///
This preliminary stuff creates a <bookinfo> entry in the XML. This is removed
when creating the PostScript/PDF output, because we do not want a full-blown
title page created for those versions. The stylesheet fudges up a title line to
replace the text "Table of contents". However, for the other forms of output,
the <bookinfo> element is retained and used.
///

  Exim's interfaces to mail filtering
  ===================================
  :author:          Philip Hazel
  :copyright:       University of Cambridge
  :cpyear:          2005
  :date:            13 May 2005
  :doctitleabbrev:  Exim filtering
  :revision:        4.50



//////////////////////////////////////////////////////////////////////////////
***WARNING*** Do not put anything, not even a titleabbrev setting, before
the first chapter (luckily it does not need one) because if you do, AsciiDoc
creates an empty <preface> element, which we do not want.
//////////////////////////////////////////////////////////////////////////////


Forwarding and filtering in Exim
--------------------------------

This document describes the user interfaces to Exim's in-built mail filtering
facilities, and is copyright (C) University of Cambridge 2005. It corresponds
to Exim version 4.50.



Introduction
~~~~~~~~~~~~
Most Unix mail transfer agents (programs that deliver mail) permit individual
users to specify automatic forwarding of their mail, usually by placing a list
of forwarding addresses in a file called '.forward' in their home directories.
Exim extends this facility by allowing the forwarding instructions to be a set
of rules rather than just a list of addresses, in effect providing ``'.forward'
with conditions''. Operating the set of rules is called 'filtering', and the
file that contains them is called a 'filter file'.

Exim supports two different kinds of filter file. An 'Exim filter' contains
instructions in a format that is unique to Exim. A 'Sieve filter' contains
instructions in the Sieve format that is defined by RFC 3028. As this is a
standard format, Sieve filter files may already be familiar to some users.
Sieve files should also be portable between different environments. However,
the Exim filtering facility contains more features (such as variable
expansion), and offers better integration with the host environment (such as
the use of external processes and pipes).

The choice of which kind of filter to use can be left to the end-user, provided
that the system administrator has configured Exim appropriately for both kinds
of filter. However, if interoperability is important, Sieve is the only
choice.

The ability to use filtering or traditional forwarding has to be enabled by the
system administrator, and some of the individual facilities can be separately
enabled or disabled. A local document should be provided to describe exactly
what has been enabled. In the absence of this, consult your system
administrator.

This document describes how to use a filter file and the format of its
contents. It is intended for use by end-users. Both Sieve filters and Exim
filters are covered. However, for Sieve filters, only issues that relate to the
Exim implementation are discussed, since Sieve itself is described elsewhere.

The contents of traditional '.forward' files are not described here. They
normally contain just a list of addresses, file names, or pipe commands,
separated by commas or newlines, but other types of item are also available.
The full details can be found in the chapter on the ^redirect^ router in the
Exim specification, which also describes how the system administrator can set
up and control the use of filtering.



Filter operation
~~~~~~~~~~~~~~~~
It is important to realize that, in Exim, no deliveries are actually made while
a filter or traditional '.forward' file is being processed. Running a filter
or processing a traditional '.forward' file sets up future delivery
operations, but does not carry them out.

The result of filter or '.forward' file processing is a list of destinations
to which a message should be delivered. The deliveries themselves take place
later, along with all other deliveries for the message. This means that it is
not possible to test for successful deliveries while filtering. It also means
that any duplicate addresses that are generated are dropped, because Exim never
delivers the same message to the same address more than once.




[[SECTtesting]]
Testing a new filter file
~~~~~~~~~~~~~~~~~~~~~~~~~
Filter files, especially the more complicated ones, should always be tested, as
it is easy to make mistakes. Exim provides a facility for preliminary testing
of a filter file before installing it. This tests the syntax of the file and
its basic operation, and can also be used with traditional '.forward' files.

Because a filter can do tests on the content of messages, a test message is
required. Suppose you have a new filter file called 'myfilter' and a test
message called 'test-message'. Assuming that Exim is installed with the
conventional path name '/usr/sbin/sendmail' (some operating systems use
'/usr/lib/sendmail'), the following command can be used:

    /usr/sbin/sendmail -bf myfilter <test-message


The %-bf% option tells Exim that the following item on the command line is the
name of a filter file that is to be tested. There is also a %-bF% option,
which is similar, but which is used for testing system filter files, as opposed
to user filter files, and which is therefore of use only to the system
administrator.

The test message is supplied on the standard input. If there are no
message-dependent tests in the filter, an empty file ('/dev/null') can be
used. A supplied message must start with header lines or the ``From'' message
separator line which is found in many multi-message folder files. Note that
blank lines at the start terminate the header lines. A warning is given if no
header lines are read.

The result of running this command, provided no errors are detected in the
filter file, is a list of the actions that Exim would try to take if presented
with the message for real.
For example, for an Exim filter, the output

    Deliver message to: gulliver@???
    Save message to: /home/lemuel/mail/archive


means that one copy of the message would be sent to
'gulliver@???', and another would be added to the file
_/home/lemuel/mail/archive_, if all went well.

The actions themselves are not attempted while testing a filter file in this
way; there is no check, for example, that any forwarding addresses are valid.
For an Exim filter,
if you want to know why a particular action is being taken, add the %-v%
option to the command. This causes Exim to output the results of any
conditional tests and to indent its output according to the depth of nesting of
^if^ commands. Further additional output from a filter test can be generated
by the ^testprint^ command, which is described below.

When Exim is outputting a list of the actions it would take, if any text
strings are included in the output, non-printing characters therein are
converted to escape sequences. In particular, if any text string contains a
newline character, this is shown as ``\n'' in the testing output.

When testing a filter in this way, Exim makes up an ``envelope'' for the message.
The recipient is by default the user running the command, and so is the sender,
but the command can be run with the %-f% option to supply a different sender.
For example,

  ...
  /usr/sbin/sendmail -bf myfilter \
     -f islington@??? <test-message
  ...


Alternatively, if the %-f% option is not used, but the first line of the
supplied message is a ``From'' separator from a message folder file (not the same
thing as a 'From:' header line), the sender is taken from there. If %-f% is
present, the contents of any ``From'' line are ignored.

The ``return path'' is the same as the envelope sender, unless the message
contains a 'Return-path:' header, in which case it is taken from there. You
need not worry about any of this unless you want to test out features of a
filter file that rely on the sender address or the return path.

It is possible to change the envelope recipient by specifying further options.
The %-bfd% option changes the domain of the recipient address, while the
%-bfl% option changes the ``local part'', that is, the part before the @ sign.
An adviser could make use of these to test someone else's filter file.

The %-bfp% and %-bfs% options specify the prefix or suffix for the local part.
These are relevant only when support for multiple personal mailboxes is
implemented; see the description in section <<SECTmbox>> below.
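For example, assuming the conventional sendmail path, a command such as the
following (the local part and domain shown here are invented for illustration)
tests the filter as if the message had been addressed to 'spqr@example.com':

    /usr/sbin/sendmail -bf myfilter \
       -bfl spqr -bfd example.com <test-message
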


Installing a filter file
~~~~~~~~~~~~~~~~~~~~~~~~
A filter file is normally installed under the name '.forward' in your home
directory -- it is distinguished from a conventional '.forward' file by its
first line (described below). However, the file name is configurable, and some
system administrators may choose to use some different name or location for
filter files.


Testing an installed filter file
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Testing a filter file before installation cannot find every potential problem;
for example, it does not actually run commands to which messages are piped.
Some ``live'' tests should therefore also be done once a filter is installed.

If at all possible, test your filter file by sending messages from some other
account. If you send a message to yourself from the filtered account, and
delivery fails, the error message will be sent back to the same account, which
may cause another delivery failure. It won't cause an infinite sequence of such
messages, because delivery failure messages do not themselves generate further
messages. However, it does mean that the failure won't be returned to you, and
also that the postmaster will have to investigate the stuck message.

If you have to test an Exim filter from the same account, a sensible precaution
is to include the line

    if error_message then finish endif


as the first filter command, at least while testing. This causes filtering to
be abandoned for a delivery failure message, and since no destinations are
generated, the message goes on to be delivered to the original address. Unless
there is a good reason for not doing so, it is recommended that the above test
be left in all Exim filter files.
(This does not apply to Sieve files.)



Details of filtering commands
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
The filtering commands for Sieve and Exim filters are completely different in
syntax and semantics. The Sieve mechanism is defined in RFC 3028; in the next
chapter we describe how it is integrated into Exim. The subsequent chapter
covers Exim filtering commands in detail.



[[CHAPsievefilter]]
Sieve filter files
------------------
The code for Sieve filtering in Exim was contributed by Michael Haardt, and
most of the content of this chapter is taken from the notes he provided. Since
Sieve is an extensible language, it is important to understand ``Sieve'' in this
context as ``the specific implementation of Sieve for Exim''.

This chapter does not contain a description of Sieve, since that can be found
in RFC 3028, which should be read in conjunction with these notes.

The Exim Sieve implementation offers the core as defined by RFC 3028,
comparison tests, the *copy*, *envelope*, *fileinto*, and *vacation*
extensions, but not the *reject* extension. Exim does not support message
delivery notifications (MDNs), so adding that support just to the Sieve
filter (as required for *reject*) makes little sense.
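A minimal Sieve filter using one of these extensions might look like this (the
folder name is invented for illustration); note that each extension used must
first be declared with *require*:

    # Sieve filter
    require "fileinto";
    if header :contains "subject" "sieve" { fileinto "mail/sieve-list"; }
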

In order for Sieve to work properly in Exim, the system administrator needs to
make some adjustments to the Exim configuration. These are described in the
chapter on the ^redirect^ router in the full Exim specification.


Recognition of Sieve filters
~~~~~~~~~~~~~~~~~~~~~~~~~~~~
A filter file is interpreted as a Sieve filter if its first line is

    # Sieve filter


This is what distinguishes it from a conventional '.forward' file or an Exim
filter file.



Saving to specified folders
~~~~~~~~~~~~~~~~~~~~~~~~~~~
If the system administrator has set things up as suggested in the Exim
specification, and you use *keep* or *fileinto* to save a mail into a
folder, then a folder given as an absolute path is stored where specified, a
relative path is interpreted relative to $home$, and *inbox* goes to the
standard mailbox location.
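For example (the folder names here are invented for illustration), each of the
following saves a message according to those rules; the fragment assumes an
earlier `require "fileinto";` line:

    fileinto "/home/lemuel/mail/archive";  # absolute: stored exactly there
    fileinto "mail/archive";               # stored relative to $home
    keep;                                  # the standard mailbox location
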



Strings containing header names
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
RFC 3028 does not specify what happens if a string denoting a header field does
not contain a valid header name, for example, if it contains a colon. To ease
script debugging, this implementation generates an error instead of silently
ignoring the header field, which fits the common picture of Sieve.



Exists test with empty list of headers
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
The *exists* test succeeds only if all specified headers exist. RFC 3028
does not explicitly specify what happens on an empty list of headers. This
implementation evaluates that condition as true, interpreting the RFC in a
strict sense.



Header test with invalid MIME encoding in header
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Some MUAs process invalid base64 encoded data, generating junk.
Others ignore junk after seeing an equal sign in base64 encoded data.
RFC 2047 does not specify how to react in this case, other than stating
that a client must not refuse to process a message for that reason.
RFC 2045 specifies that invalid data should be ignored (apparently
looking at end of line characters). It also specifies that invalid data
may lead to rejecting messages containing them (and there it appears to
talk about true encoding violations), which is a clear contradiction to
ignoring them.

RFC 3028 does not specify how to process incorrect MIME words.
This implementation treats them literally, as it does if the word is
correct but its character set cannot be converted to UTF-8.



Address test for multiple addresses per header
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
A header may contain multiple addresses. RFC 3028 does not explicitly
specify how to deal with them, but since the address test checks if
anything matches anything else, matching one address suffices to
satisfy the condition. That makes it impossible to test if a header
contains a certain set of addresses and no more, but it is more logical
than letting the test fail if the header contains an additional address
besides the one the test checks for.
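For example, given the invented header

    To: lemuel@example.com, gulliver@example.com

the following test succeeds, because it is sufficient for any one of the
addresses to match (the fragment assumes `require "fileinto";`):

    if address :is "to" "gulliver@example.com" { fileinto "mail/friends"; }
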



Semantics of keep
~~~~~~~~~~~~~~~~~
The *keep* command is equivalent to

    fileinto "inbox";


It saves the message and resets the implicit keep flag. It does not set the
implicit keep flag; there is no command to set it once it has been reset.



Semantics of fileinto
~~~~~~~~~~~~~~~~~~~~~
RFC 3028 does not specify whether %fileinto% should try to create a mail folder
if it does not exist. This implementation allows the sysadmin to configure that
aspect using the ^appendfile^ transport options %create_directory%,
%create_file%, and %file_must_exist%. See the ^appendfile^ transport in
the Exim specification for details.



Semantics of redirect
~~~~~~~~~~~~~~~~~~~~~
Sieve scripts are supposed to be interoperable between servers, so this
implementation does not allow mail to be redirected to unqualified addresses,
because the domain would depend on the system being used. On systems with
virtual mail domains, the default domain is probably not what the user expects
it to be.



String arguments
~~~~~~~~~~~~~~~~
There has been confusion about whether the string arguments to *require*
are matched case-sensitively or not. This implementation matches them with
the match type ^:is^ (the default; see section 2.7.1) and the comparator
^i;ascii-casemap^ (the default; see section 2.7.3). The RFC defines the
command defaults clearly, so any implementation that behaves differently
violates RFC 3028. The same is true for comparator names, which are also
specified as strings.



Number units
~~~~~~~~~~~~
There is a mistake in RFC 3028: the suffix G denotes gibi-, not tebibyte.
The mistake is obvious, because RFC 3028 specifies G to denote 2^30
(which is gibi, not tebi), and that is what this implementation uses as
the scaling factor for the suffix G.



RFC compliance
~~~~~~~~~~~~~~
Exim requires the first line of a Sieve filter to be

    # Sieve filter


Of course the RFC does not specify that line. Do not expect examples to work
without adding it, though.

RFC 3028 requires the use of CRLF to terminate a line.
The rationale was that CRLF is universally used in network protocols
to mark the end of the line. This implementation does not embed Sieve
in a network protocol, but uses Sieve scripts as part of the Exim MTA.
Since all parts of Exim use LF as newline character, this implementation
does, too, by default, though the system administrator may choose (at Exim
compile time) to use CRLF instead.

Exim violates RFC 2822, section 3.6.8, by accepting 8-bit header names, so
this implementation repeats this violation to stay consistent with Exim.
This is in preparation for UTF-8 data.

Sieve scripts cannot contain NUL characters in strings, but mail
headers could contain MIME encoded NUL characters, which could never
be matched by Sieve scripts using exact comparisons. For that reason,
this implementation extends the Sieve quoted string syntax with \0
to describe a NUL character, thereby violating RFC 3028, in which \0 is
the same as 0. Even without using \0, the following tests are all true in
this implementation. Implementations that use C-style strings will only
evaluate the first test as true.

    Subject: =?iso-8859-1?q?abc=00def


    header :contains "Subject" ["abc"]
    header :contains "Subject" ["def"]
    header :matches "Subject" ["abc?def"]


Note that if Sieve is regarded as an MUA, RFC 2047 can be interpreted
as permitting NUL characters to truncate strings in Sieve
implementations, although this is not recommended. It further permits
encoded NUL characters in headers, but that is not recommended either.
The above example shows why.

RFC 3028 states that if an implementation fails to convert a character
set to UTF-8, two strings cannot be equal if one contains octets greater
than 127. Assuming that all unknown character sets are one-byte character
sets with the lower 128 octets being US-ASCII is not sound, so this
implementation violates RFC 3028 and treats such MIME words literally.
That way at least something could be matched.

The folder specified by *fileinto* must not contain the character
sequence ``..'' to avoid security problems. RFC 3028 does not specify the
syntax of folders apart from *keep* being equivalent to

    fileinto "INBOX";


This implementation uses _inbox_ instead.

Sieve script errors currently cause messages to be silently filed into
_inbox_. RFC 3028 requires that the user be notified of that condition.
This may be implemented in future by adding a header line to mails that
are filed into _inbox_ due to an error in the filter.



[[CHAPeximfilter]]
Exim filter files
-----------------
This chapter contains a full description of the contents of Exim filter files.


Format of Exim filter files
~~~~~~~~~~~~~~~~~~~~~~~~~~~
Apart from leading white space, the first text in an Exim filter file must be

    # Exim filter


This is what distinguishes it from a conventional '.forward' file or a Sieve
filter file. If the file does not have this initial line (or the equivalent for
a Sieve filter), it is treated as a conventional '.forward' file, both when
delivering mail and when using the %-bf% testing mechanism. The white space in
the line is optional, and any capitalization may be used. Further text on the
same line is treated as a comment. For example, you could have

    #   Exim filter   <<== do not edit or remove this line!


The remainder of the file is a sequence of filtering commands, which consist of
keywords and data values. For example, in the command

    deliver gulliver@???


the keyword is `deliver` and the data value is
`gulliver@???`. White space or line breaks separate the
components of a command, except in the case of conditions for the ^if^ command,
where round brackets (parentheses) also act as separators. Complete commands
are separated from each other by white space or line breaks; there are no
special terminators. Thus, several commands may appear on one line, or one
command may be spread over a number of lines.

If the character # follows a separator anywhere in a command, everything from
# up to the next newline is ignored. This provides a way of including comments
in a filter file.
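For example, the following fragment (the addresses are invented for
illustration) contains two complete `deliver` commands on a single line,
followed by a comment:

    # Exim filter
    deliver lemuel@example.com deliver gulliver@example.com  # two copies
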


Data values in filter commands
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
There are two ways in which a data value can be input:

- If the text contains no white space then it can be typed verbatim. However, if
it is part of a condition, it must also be free of round brackets
(parentheses), as these are used for grouping in conditions.

- Otherwise, it must be enclosed in double quotation marks. In this case, the
character \ (backslash) is treated as an ``escape character'' within the string,
causing the following character or characters to be treated specially:

&&&&
`\n` is replaced by a newline
`\r` is replaced by a carriage return
`\t` is replaced by a tab
&&&&

Backslash followed by up to three octal digits is replaced by the character
specified by those digits, and \x followed by up to two hexadecimal digits is
treated similarly. Backslash followed by any other character is replaced
by the second character, so that in particular, \\" becomes " and \\ becomes
\. A data item enclosed in double quotes can be continued onto the next line
by ending the first line with a backslash. Any leading white space at the start
of the continuation line is ignored.

In addition to the escape character processing that occurs when strings are
enclosed in quotes, most data values are also subject to 'string expansion'
(as described in the next section), in which case the characters `\$` and `\`
are also significant. This means that if a single backslash is actually
required in such a string, and the string is also quoted, \\\\ has to be
entered.

The maximum permitted length of a data string, before expansion, is 1024
characters.
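For example (the text being tested for is invented), the following condition
uses a quoted data value because the value contains white space; the `\t` is
replaced by a tab character before the string is used:

    if $message_body contains "urgent:\treply today" then ...
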


++++++++++++
<?hard-pagebreak?>
++++++++++++

[[SECTfilterstringexpansion]]
String expansion
~~~~~~~~~~~~~~~~
Most data values are expanded before use. Expansion consists of replacing
substrings beginning with `\$` with other text. The full expansion facilities
available in Exim are extensive. If you want to know everything that Exim can
do with strings, you should consult the chapter on string expansion in the Exim
documentation.

In filter files, by far the most common use of string expansion is the
substitution of the contents of a variable. For example, the substring

    $reply_address


is replaced by the address to which replies to the message should be sent. If
such a variable name is followed by a letter or digit or underscore, it must be
enclosed in curly brackets (braces), for example,

    ${reply_address}


If a `\$` character is actually required in an expanded string, it must be
escaped with a backslash, and because backslash is also an escape character in
quoted input strings, it must be doubled in that case. The following two
examples illustrate two different ways of testing for a `\$` character in a
message:

    if $message_body contains \$ then ...
    if $message_body contains "\\$" then ...


You can prevent part of a string from being expanded by enclosing it between
two occurrences of `\N`. For example,

    if $message_body contains \N$$$$\N then ...


tests for a run of four dollar characters.


Some useful general variables
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
A complete list of the available variables is given in the Exim documentation.
This shortened list contains the ones that are most likely to be useful in
personal filter files:

$body_linecount$: The number of lines in the body of the message.

$body_zerocount$: The number of binary zero characters in the body of the
message.


$home$: In conventional configurations, this variable normally contains the
user's home directory. The system administrator can, however, change this.

$local_part$: The part of the email address that precedes the @ sign --
normally the user's login name. If support for multiple personal mailboxes is
enabled (see section <<SECTmbox>> below) and a prefix or suffix for the local
part was recognized, it is removed from the string in this variable.

$local_part_prefix$: If support for multiple personal mailboxes is enabled
(see section <<SECTmbox>> below), and a local part prefix was recognized,
this variable contains the prefix. Otherwise it contains an empty string.

$local_part_suffix$: If support for multiple personal mailboxes is enabled
(see section <<SECTmbox>> below), and a local part suffix was recognized,
this variable contains the suffix. Otherwise it contains an empty string.

$message_body$: The initial portion of the body of the message. By default,
up to 500 characters are read into this variable, but the system administrator
can configure this to some other value. Newlines in the body are converted into
single spaces.

$message_body_end$: The final portion of the body of the message, formatted
and limited in the same way as $message_body$.

$message_body_size$: The size of the body of the message, in bytes.

$message_headers$: The header lines of the message, concatenated into a
single string, with newline characters between them.

$message_id$: The message's local identification string, which is unique for
each message handled by a single host.

$message_size$: The size of the entire message, in bytes.

$original_local_part$: When an address that arrived with the message is
being processed, this contains the same value as the variable $local_part$.
However, if an address generated by an alias, forward, or filter file is being
processed, this variable contains the local part of the original address.

$reply_address$: The contents of the 'Reply-to:' header, if the message
has one; otherwise the contents of the 'From:' header. It is the address to
which normal replies to the message should be sent.

$return_path$: The return path -- that is, the sender field that will be
transmitted as part of the message's envelope if the message is sent to another
host. This is the address to which delivery errors are sent. In many cases,
this variable has the same value as $sender_address$, but if, for example,
an incoming message to a mailing list has been expanded, $return_path$ may
have been changed to contain the address of the list maintainer.

$sender_address$: The sender address that was received in the envelope of
the message. This is not necessarily the same as the contents of the 'From:'
or 'Sender:' header lines. For delivery error messages (``bounce messages'')
there is no sender address, and this variable is empty.

$tod_full$: A full version of the time and date, for example: Wed, 18 Oct
1995 09:51:40 +0100. The timezone is always given as a numerical offset from
GMT.

$tod_log$: The time and date in the format used for writing Exim's log files,
without the timezone, for example: 1995-10-12 15:32:29.

$tod_zone$: The local timezone offset, for example: +0100.



[[SECTheadervariables]]
Header variables
~~~~~~~~~~~~~~~~
There is a special set of expansion variables containing the header lines of
the message being processed. These variables have names beginning with
$header_$ followed by the name of the header line, terminated by a colon.
For example,

    $header_from:
    $header_subject:


The whole item, including the terminating colon, is replaced by the contents of
the message header line. If there is more than one header line with the same
name, their contents are concatenated. For header lines whose data consists of
a list of addresses (for example, 'From:' and 'To:'), a comma and newline is
inserted between each set of data. For all other header lines, just a newline
is used.

Leading and trailing white space is removed from header line data, and if there
are any MIME ``words'' that are encoded as defined by RFC 2047 (because they
contain non-ASCII characters), they are decoded and translated, if possible, to
a local character set. Translation is attempted only on operating systems that
have the ^^iconv()^^ function. This makes the header line look the same as it
would when displayed by an MUA. The default character set is ISO-8859-1, but
this can be changed by means of the ^headers^ command (see below).

If you want to see the actual characters that make up a header line, you can
specify $rheader_$ instead of $header_$. This inserts the ``raw''
header line, unmodified.

There is also an intermediate form, requested by $bheader_$, which removes
leading and trailing space and decodes MIME ``words'', but does not do any
character translation. If an attempt to decode what looks superficially like a
MIME ``word'' fails, the raw string is returned. If decoding produces a binary
zero character, it is replaced by a question mark.

The capitalization of the name following $header_$ is not significant.
Because any printing character except colon may appear in the name of a
message's header (this is a requirement of RFC 2822, the document that
describes the format of a mail message), curly brackets must 'not' be used in
this case, as they will be taken as part of the header name. Two shortcuts are
allowed in naming header variables:

- The initiating $header_$, $rheader_$, or $bheader_$ can be
abbreviated to $h_$, $rh_$, or $bh_$, respectively.

- The terminating colon can be omitted if the next character is white space. The
white space character is retained in the expanded string. However, this is not
recommended, because it makes it easy to forget the colon when it really is
needed.

If the message does not contain a header of the given name, an empty string is
substituted. Thus it is important to spell the names of headers correctly. Do
not use $header_Reply_to$ when you really mean $header_Reply-to$.
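
For example, the following two tests are equivalent; the second uses the
$h_$ abbreviation:

    if $header_subject: contains "invoice" then ...
    if $h_subject: contains "invoice" then ...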


User variables
~~~~~~~~~~~~~~
There are ten user variables with names $n0$ -- $n9$ that can be
incremented by the ^add^ command (see section <<SECTadd>>). These can be used
for ``scoring'' messages in various ways. If Exim is configured to run a
``system filter'' on every message, the values left in these variables are
copied into the variables $sn0$ -- $sn9$ at the end of the system filter, thus
making them available to users' filter files. How these values are used is
entirely up to the individual installation.
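
As an illustrative sketch (the threshold, the folder name, and the meaning of
$sn0$ are invented for this example), a user filter might act on a score left
by the system filter, using the numeric ``is above'' condition:

    # Assumes the system filter leaves a score in $sn0
    if $sn0 is above 50 then
      save $home/mail/flagged
    endif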


Current directory
~~~~~~~~~~~~~~~~~
The contents of your filter file should not make any assumptions about the
current directory. It is best to use absolute paths for file names; you
can normally make use of the $home$ variable to refer to your home directory.
The ^save^ command automatically inserts $home$ at the start of non-absolute
paths.




[[SECTsigdel]]
Significant deliveries
~~~~~~~~~~~~~~~~~~~~~~
When a message is processed by a filter file in the course of delivery, what
happens next, after the filter file has been processed, depends on
whether or not the filter sets up any 'significant deliveries'. If at least
one significant delivery is set up, the filter is considered to have handled
the entire delivery arrangements for the current address, and no further
processing of the address takes place. If, however, no significant deliveries
are set up, Exim continues processing the current address as if there were no
filter file, and typically sets up a delivery of a copy of the message into a
local mailbox. In particular, this happens in the special case of a filter file
containing only comments.

The delivery commands ^deliver^, ^save^, and ^pipe^ are by default
significant. However, if such a command is preceded by the word ^unseen^, its
delivery is not considered to be significant. In contrast, other commands such
as ^mail^ and ^vacation^ do not set up significant deliveries unless
preceded by the word ^seen^.

The following example commands set up significant deliveries:

    deliver jack@???
    pipe $home/bin/mymailscript
    seen mail subject "message discarded"
    seen finish


The following example commands do not set up significant deliveries:

    unseen deliver jack@???
    unseen pipe $home/bin/mymailscript
    mail subject "message discarded"
    finish





Filter commands
~~~~~~~~~~~~~~~
The filter commands that are described in subsequent sections are listed
below, with the section in which they are described in brackets:

  [frame="none"]
  `-------------`-----------------------------------------------
  ^add^         ~~increment a user variable (section <<SECTadd>>)
  ^deliver^     ~~deliver to an email address (section <<SECTdeliver>>)
  ^fail^        ~~force delivery failure (sysadmin use) (section <<SECTfail>>)
  ^finish^      ~~end processing (section <<SECTfinish>>)
  ^freeze^      ~~freeze message (sysadmin use) (section <<SECTfreeze>>)
  ^headers^     ~~set the header character set (section <<SECTheaders>>)
  ^if^          ~~test condition(s) (section <<SECTif>>)
  ^logfile^     ~~define log file (section <<SECTlog>>)
  ^logwrite^    ~~write to log file (section <<SECTlog>>)
  ^mail^        ~~send a reply message (section <<SECTmail>>)
  ^pipe^        ~~pipe to a command (section <<SECTpipe>>)
  ^save^        ~~save to a file (section <<SECTsave>>)
  ^testprint^   ~~print while testing (section <<SECTtestprint>>)
  ^vacation^    ~~tailored form of ^mail^ (section <<SECTmail>>)
  --------------------------------------------------------------


The ^headers^ command has additional parameters that can be used only in a
system filter. The ^fail^ and ^freeze^ commands are available only when
Exim's filtering facilities are being used as a system filter, and are
therefore usable only by the system administrator and not by ordinary users.
They are mentioned only briefly in this document; for more information, see the
main Exim specification.



[[SECTadd]]
The add command
~~~~~~~~~~~~~~~
  &&&
  `     add `<'number'>` to `<'user variable'>
  `e.g. add 2 to n3`
  &&&


There are 10 user variables of this type, with names $n0$ -- $n9$. Their
values can be obtained by the normal expansion syntax (for example $n3$) in
other commands. At the start of filtering, these variables all contain zero.
Both arguments of the ^add^ command are expanded before use, making it
possible to add variables to each other. Subtraction can be obtained by adding
negative numbers.
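
For example, the following adds a constant, adds one variable to another, and
subtracts by adding a negative number:

    add 2 to n3
    add $n3 to n4
    add -1 to n3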



[[SECTdeliver]]
The deliver command
~~~~~~~~~~~~~~~~~~~

  &&&
  `     deliver` <'mail address'>
  `e.g. deliver "Dr Livingstone <David@???>"`
  &&&


This command provides a forwarding operation. The delivery that it sets up is
significant unless the command is preceded by ^unseen^ (see section
<<SECTsigdel>>). The message is sent on to the given address, exactly as
happens if the address had appeared in a traditional '.forward' file. If you
want to deliver the message to a number of different addresses, you can use
more than one ^deliver^ command (each one may have only one address). However,
duplicate addresses are discarded.

To deliver a copy of the message to your normal mailbox, your login name can be
given as the address. Once an address has been processed by the filtering
mechanism, an identical generated address will not be so processed again, so
doing this does not cause a loop.
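
For example (using invented addresses in the style of this document), the
following forwards the message to two addresses and also keeps a copy in the
normal mailbox by giving the login name:

    deliver jon@lilliput.example
    deliver pat@blefuscu.example
    deliver lg303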

However, if you have a mail alias, you should 'not' refer to it here. For
example, if the mail address 'L.Gulliver' is aliased to 'lg303' then all
references in Gulliver's '.forward' file should be to 'lg303'. A reference
to the alias will not work for messages that are addressed to that alias,
since, like '.forward' file processing, aliasing is performed only once on an
address, in order to avoid looping.

Following the new address, an optional second address, preceded by
^errors_to^, may appear. This changes the address to which delivery errors on
the forwarded message will be sent. Instead of going to the message's original
sender, they go to this new address. For ordinary users, the only value that is
permitted for this address is the user whose filter file is being processed.
For example, the user 'lg303' whose mailbox is in the domain
'lilliput.example' could have a filter file that contains

    deliver jon@??? errors_to lg303@???


Clearly, using this feature makes sense only in situations where not all
messages are being forwarded. In particular, bounce messages must not be
forwarded in this way, as this is likely to create a mail loop if something
goes wrong.



[[SECTsave]]
The save command
~~~~~~~~~~~~~~~~
  &&&
  `     save `<'file name'>
  `e.g. save $home/mail/bookfolder`
  &&&


This command specifies that a copy of the message is to be appended to the
given file (that is, the file is to be used as a mail folder). The delivery
that ^save^ sets up is significant unless the command is preceded by
^unseen^ (see section <<SECTsigdel>>).

More than one ^save^ command may be obeyed; each one causes a copy of the
message to be written to its argument file, provided the file names are
different (duplicate ^save^ commands are ignored).

If the file name does not start with a / character, the contents of the
$home$ variable are prepended, unless it is empty. In conventional
configurations, this variable is normally set in a user filter to the user's
home directory, but the system administrator may set it to some other path. In
some configurations, $home$ may be unset, in which case a non-absolute path
name may be generated. Such configurations convert this to an absolute path
when the delivery takes place. In a system filter, $home$ is never set.
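
For example, in a conventional configuration where $home$ is set, these two
commands are equivalent:

    save mail/archive
    save $home/mail/archive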

The user must of course have permission to write to the file, and the writing
of the file takes place in a process that is running as the user, under the
user's primary group. Any secondary groups to which the user may belong are not
normally taken into account, though the system administrator can configure Exim
to set them up. In addition, the ability to use this command at all is
controlled by the system administrator -- it may be forbidden on some systems.

An optional mode value may be given after the file name. The value for the mode
is interpreted as an octal number, even if it does not begin with a zero. For
example:

    save /some/folder 640


This makes it possible for users to override the system-wide mode setting for
file deliveries, which is normally 600. If an existing file does not have the
correct mode, it is changed.

An alternative form of delivery may be enabled on your system, in which each
message is delivered into a new file in a given directory. If this is the case,
this functionality can be requested by giving the directory name terminated by
a slash after the ^save^ command, for example

    save separated/messages/


There are several different formats for such deliveries; check with your system
administrator or local documentation to find out which (if any) are available
on your system. If this functionality is not enabled, the use of a path name
ending in a slash causes an error.



[[SECTpipe]]
The pipe command
~~~~~~~~~~~~~~~~
  &&&
  `     pipe `<'command'>
  `e.g. pipe "$home/bin/countmail $sender_address"`
  &&&


This command specifies that the message is to be delivered to the specified
command using a pipe. The delivery that it sets up is significant unless the
command is preceded by ^unseen^ (see section <<SECTsigdel>>). Remember,
however, that no deliveries are done while the filter is being processed. All
deliveries happen later on. Therefore, the result of running the pipe is not
available to the filter.

When the deliveries are done, a separate process is run, and a copy of the
message is passed on its standard input. The process runs as the user, under
the user's primary group. Any secondary groups to which the user may belong are
not normally taken into account, though the system administrator can configure
Exim to set them up. More than one ^pipe^ command may appear; each one causes
a copy of the message to be written to its argument pipe, provided the
commands are different (duplicate ^pipe^ commands are ignored).

When the time comes to transport the message,
the command supplied to ^pipe^ is split up by Exim into a command name and a
number of arguments. These are delimited by white space except for arguments
enclosed in double quotes, in which case backslash is interpreted as an escape,
or in single quotes, in which case no escaping is recognized. Note that as the
whole command is normally supplied in double quotes, a second level of quoting
is required for internal double quotes. For example:

    pipe "$home/myscript \"size is $message_size\""


String expansion is performed on the separate components after the line has
been split up, and the command is then run directly by Exim; it is not run
under a shell. Therefore, substitution cannot change the number of arguments,
nor can quotes, backslashes or other shell metacharacters in variables cause
confusion.

Documentation for some programs that are normally run via this kind of pipe
often suggests that the command should start with

    IFS=" "


This is a shell command, and should 'not' be present in Exim filter files,
because Exim does not normally run the command under a shell.

However, there is an option that the administrator can set to cause a shell to
be used. In this case, the entire command is expanded as a single string and
passed to the shell for interpretation. It is recommended that this be avoided
if at all possible, since it can lead to problems when inserted variables
contain shell metacharacters.

The default PATH set up for the command is determined by the system
administrator, usually containing at least _/usr/bin_ so that common commands
are available without having to specify an absolute file name. However, it is
possible for the system administrator to restrict the pipe facility so that the
command name must not contain any / characters, and must be found in one of the
directories in the configured PATH. It is also possible for the system
administrator to lock out the use of the ^pipe^ command altogether.

When the command is run, a number of environment variables are set up. The
complete list for pipe deliveries may be found in the Exim reference manual.
Those that may be useful for pipe deliveries from user filter files are:

  &&&
  `DOMAIN            `   the domain of the address
  `HOME              `   your home directory
  `LOCAL_PART        `   see below
  `LOCAL_PART_PREFIX `   see below
  `LOCAL_PART_SUFFIX `   see below
  `LOGNAME           `   your login name
  `MESSAGE_ID        `   the unique id of the message
  `PATH              `   the command search path
  `RECIPIENT         `   the complete recipient address
  `SENDER            `   the sender of the message
  `SHELL             `   `/bin/sh`
  `USER              `   see below
  &&&


LOCAL_PART, LOGNAME, and USER are all set to the same value,
namely, your login id. LOCAL_PART_PREFIX and LOCAL_PART_SUFFIX may
be set if Exim is configured to recognize prefixes or suffixes in the local
parts of addresses. For example, a message addressed to
'pat-suf2@???' may cause the filter for user 'pat' to be run. If
this sets up a pipe delivery, LOCAL_PART_SUFFIX is `-suf2` when the
pipe command runs. The system administrator has to configure Exim specially for
this feature to be available.

If you run a command that is a shell script, be very careful in your use of
data from the incoming message in the commands in your script. RFC 2822 is very
generous in the characters that are permitted to appear in mail addresses, and
in particular, an address may begin with a vertical bar or a slash. For this
reason you should always use quotes round any arguments that involve data from
the message, like this:

    /some/command '$SENDER'


so that inserted shell meta-characters do not cause unwanted effects.

Remember that, as was explained earlier, the pipe command is not run at the
time the filter file is interpreted. The filter just defines what deliveries
are required for one particular addressee of a message. The deliveries
themselves happen later, once Exim has decided everything that needs to be done
for the message.

A consequence of this is that you cannot inspect the return code from the pipe
command from within the filter. Nevertheless, the code returned by the command
is important, because Exim uses it to decide whether the delivery has succeeded
or failed.

The command should return a zero completion code if all has gone well. Most
non-zero codes are treated by Exim as indicating a failure of the pipe. This is
treated as a delivery failure, causing the message to be returned to its
sender. However, there are some completion codes that are treated as temporary
errors. The message remains on Exim's spool disk, and the delivery is tried
again later, though it will ultimately time out if the delivery failures go on
too long. The completion codes to which this applies can be specified by the
system administrator; the default values are 73 and 75.

The pipe command should not normally write anything to its standard output or
standard error file descriptors. If it does, whatever is written is normally
returned to the sender of the message as a delivery error, though this action
can be varied by the system administrator.



[[SECTmail]]
Mail commands
~~~~~~~~~~~~~
There are two commands that cause the creation of a new mail message, neither
of which counts as a significant delivery unless the command is preceded by the
word ^seen^ (see section <<SECTsigdel>>). This is a powerful facility, but it
should be used with care, because of the danger of creating infinite sequences
of messages. The system administrator can forbid the use of these commands
altogether.

To help prevent runaway message sequences, these commands have no effect when
the incoming message is a bounce (delivery error) message, and messages sent by
this means are treated as if they were reporting delivery errors. Thus, they
should never themselves cause a bounce message to be returned. The basic
mail-sending command is

  &&&
  `mail [to `<'address-list'>`]`
  `     [cc `<'address-list'>`]`
  `     [bcc `<'address-list'>`]`
  `     [from `<'address'>`]`
  `     [reply_to `<'address'>`]`
  `     [subject `<'text'>`]`
  `     [extra_headers `<'text'>`]`
  `     [text `<'text'>`]`
  `     [[expand] file `<'filename'>`]`
  `     [return message]`
  `     [log `<'log file name'>`]`
  `     [once `<'note file name'>`]`
  `     [once_repeat `<'time interval'>`]`
  `e.g. mail text "Your message about $h_subject: has been received"`
  &&&

Each <'address-list'> can contain a number of addresses, separated by commas,
in the format of a 'To:' or 'Cc:' header line. In fact, the text you supply
here is copied exactly into the appropriate header line. It may contain
additional information as well as email addresses. For example:

  ...
  mail to "Julius Caesar <jc@???>, \
           <ma@???> (Mark A.)"
  ...


Similarly, the texts supplied for ^from^ and ^reply_to^ are copied into
their respective header lines.

As a convenience for use in one common case, there is also a command called
^vacation^. It behaves in the same way as ^mail^, except that the defaults for
the %subject%, %file%, %log%, %once%, and %once_repeat% options are

    subject "On vacation"
    expand file .vacation.msg
    log  .vacation.log
    once .vacation
    once_repeat 7d


respectively. These are the same file names and repeat period used by the
traditional Unix ^vacation^ command. The defaults can be overridden by
explicit settings, but if a file name is given its contents are expanded only
if explicitly requested.

*Warning*: The ^vacation^ command should always be used conditionally,
subject to at least the ^personal^ condition (see section <<SECTpersonal>>
below) so as not to send automatic replies to non-personal messages from
mailing lists or elsewhere. Sending an automatic response to a mailing list or
a mailing list manager is an Internet Sin.
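
A minimal safe usage therefore wraps the command in a ^personal^ test,
relying on the defaults listed above:

    if personal then
      vacation
    endif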

For both commands, the key/value argument pairs can appear in any order. At
least one of ^text^ or ^file^ must appear (except with ^vacation^, where
there is a default for ^file^); if both are present, the text string appears
first in the message. If ^expand^ precedes ^file^, each line of the file is
subject to string expansion before it is included in the message.

Several lines of text can be supplied to ^text^ by including the escape
sequence ``\n'' in the string wherever a newline is required. If the command is
output during filter file testing, newlines in the text are shown as ``\n''.

Note that the keyword for creating a 'Reply-To:' header is ^reply_to^,
because Exim keywords may contain underscores, but not hyphens. If the ^from^
keyword is present and the given address does not match the user who owns the
forward file, Exim normally adds a 'Sender:' header to the message,
though it can be configured not to do this.

The %extra_headers% keyword allows you to add custom header lines to the
message. The text supplied must be one or more syntactically valid RFC 2822
header lines. You can use ``\n'' within quoted text to specify newlines between
headers, and also to define continued header lines. For example:

    extra_headers "h1: first\nh2: second\n continued\nh3: third"


No newline should appear at the end of the final header line.

If no ^to^ argument appears, the message is sent to the address in the
$reply_address$ variable (see section <<SECTfilterstringexpansion>> above).
An 'In-Reply-To:' header is automatically included in the created message,
giving a reference to the message identification of the incoming message.

If ^return message^ is specified, the incoming message that caused the filter
file to be run is added to the end of the message, subject to a maximum size
limitation.

If a log file is specified, a line is added to it for each message sent.

If a ^once^ file is specified, it is used to hold a database for remembering
who has received a message, and no more than one message is ever sent to any
particular address, unless ^once_repeat^ is set. This specifies a time
interval after which another copy of the message is sent. The interval is
specified as a sequence of numbers, each followed by the initial letter of one
of ``seconds'', ``minutes'', ``hours'', ``days'', or ``weeks''. For example,

    once_repeat 5d4h


causes a new message to be sent if 5 days and 4 hours have elapsed since the
last one was sent. There must be no white space in a time interval.

Commonly, the file name specified for ^once^ is used as the base name for
direct-access (DBM) file operations. There are a number of different DBM
libraries in existence. Some operating systems provide one as a default, but
even in this case a different one may have been used when building Exim. With
some DBM libraries, specifying ^once^ results in two files being created,
with the suffixes _.dir_ and _.pag_ being added to the given name. With
some others a single file with the suffix _.db_ is used, or the name is used
unchanged.

Using a DBM file for implementing the ^once^ feature means that the file
grows as large as necessary. This is not usually a problem, but some system
administrators want to put a limit on it. The facility can be configured not to
use a DBM file, but instead, to use a regular file with a maximum size. The
data in such a file is searched sequentially, and if the file fills up, the
oldest entry is deleted to make way for a new one. This means that some
correspondents may receive a second copy of the message after an unpredictable
interval. Consult your local information to see if your system is configured
this way.

More than one ^mail^ or ^vacation^ command may be obeyed in a single filter
run; they are all honoured, even when they are to the same recipient.



[[SECTlog]]
Logging commands
~~~~~~~~~~~~~~~~
A log can be kept of actions taken by a filter file. This facility is normally
available in conventional configurations, but there are some situations where
it might not be. Also, the system administrator may choose to disable it. Check
your local information if in doubt.

Logging takes place while the filter file is being interpreted. It does not
queue up for later like the delivery commands. The reason for this is so that a
log file need be opened only once for several write operations. There are two
commands, neither of which constitutes a significant delivery. The first
defines a file to which logging output is subsequently written:

  &&&
  `     logfile `<'file name'>
  `e.g. logfile $home/filter.log`
  &&&


The file name must be fully qualified. You can use $home$, as in this
example, to refer to your home directory. The file name may optionally be
followed by a mode for the file, which is used if the file has to be created.
For example,

    logfile $home/filter.log 0644


The number is interpreted as octal, even if it does not begin with a zero.
The default for the mode is 600. It is suggested that the ^logfile^ command
normally appear as the first command in a filter file. Once ^logfile^ has
been obeyed, the ^logwrite^ command can be used to write to the log file:

  &&&
  `     logwrite "`<'some text string'>`"`
  `e.g. logwrite "$tod_log $message_id processed"`
  &&&


It is possible to have more than one ^logfile^ command, to specify writing to
different log files in different circumstances. Writing takes place at the end
of the file, and a newline character is added to the end of each string if
there isn't one already there. Newlines can be put in the middle of the string
by using the ``\n'' escape sequence. Lines from simultaneous deliveries may get
interleaved in the file, as there is no interlocking, so you should plan your
logging with this in mind. However, data should not get lost.



[[SECTfinish]]
The finish command
~~~~~~~~~~~~~~~~~~
The command ^finish^, which has no arguments, causes Exim to stop
interpreting the filter file. This is not a significant action unless preceded
by ^seen^. A filter file containing only ^seen finish^ is a black hole.


[[SECTtestprint]]
The testprint command
~~~~~~~~~~~~~~~~~~~~~
It is sometimes helpful to be able to print out the values of variables when
testing filter files. The command

  &&&
  `     testprint `<'text'>
  `e.g. testprint "home=$home reply_address=$reply_address"`
  &&&


does nothing when mail is being delivered. However, when the filtering code is
being tested by means of the %-bf% option (see section <<SECTtesting>> above),
the value of the string is written to the standard output.


++++++++++++
<?hard-pagebreak?>
++++++++++++
[[SECTfail]]
The fail command
~~~~~~~~~~~~~~~~
When Exim's filtering facilities are being used as a system filter, the
^fail^ command is available, to force delivery failure. Because this command
is normally usable only by the system administrator, and not enabled for use by
ordinary users, it is described in more detail in the main Exim specification
rather than in this document.


[[SECTfreeze]]
The freeze command
~~~~~~~~~~~~~~~~~~
When Exim's filtering facilities are being used as a system filter, the
^freeze^ command is available, to freeze a message on the queue. Because this
command is normally usable only by the system administrator, and not enabled
for use by ordinary users, it is described in more detail in the main Exim
specification rather than in this document.



[[SECTheaders]]
The headers command
~~~~~~~~~~~~~~~~~~~
The ^headers^ command can be used to change the target character set that is
used when translating the contents of encoded header lines for insertion by the
$header_$ mechanism (see section <<SECTheadervariables>> above). The default
can be set in the Exim configuration; if not specified, ISO-8859-1 is used. The
only currently supported format for the ^headers^ command in user filters is as
in this example:

    headers charset "UTF-8"


That is, ^headers^ is followed by the word ^charset^ and then the name of a
character set. This particular example would be useful if you wanted to compare
the contents of a header to a UTF-8 string.

In system filter files, the ^headers^ command can be used to add or remove
header lines from the message. These features are described in the main Exim
specification.



[[SECTif]]
Obeying commands conditionally
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Most of the power of filtering comes from the ability to test conditions and
obey different commands depending on the outcome. The ^if^ command is used to
specify conditional execution, and its general form is

  &&&
  `if    `<'condition'>
  `then  `<'commands'>
  `elif  `<'condition'>
  `then  `<'commands'>
  `else  `<'commands'>
  `endif`
  &&&


There may be any number of ^elif^ and ^then^ sections (including none) and
the ^else^ section is also optional. Any number of commands, including nested
^if^ commands, may appear in any of the <'commands'> sections.

Conditions can be combined by using the words ^and^ and ^or^, and round
brackets (parentheses) can be used to specify how several conditions are to
combine. Without brackets, ^and^ is more binding than ^or^.
For example,

    if
      $h_subject: contains "Make money" or
      $h_precedence: is "junk" or
      ($h_sender: matches ^\\d{8}@ and not personal) or
      $message_body contains "this is not spam"
    then
      seen finish
    endif


A condition can be preceded by ^not^ to negate it, and there are also some
negative forms of condition that are more English-like.



++++++++++++
<?hard-pagebreak?>
++++++++++++
String testing conditions
~~~~~~~~~~~~~~~~~~~~~~~~~
There are a number of conditions that operate on text strings, using the words
``begins'', ``ends'', ``is'', ``contains'' and ``matches''. If you want to apply the same
test to more than one header line, you can easily concatenate them into a
single string for testing, as in this example:

    if "$h_to:, $h_cc:" contains me@??? then ...


If a string-testing condition name is written in lower case, the testing
of letters is done without regard to case; if it is written in upper case
(for example, ``CONTAINS''), the case of letters is taken into account.

  &&&
  `     `<'text1'>` begins `<'text2'>
  `     `<'text1'>` does not begin `<'text2'>
  `e.g. $header_from: begins "Friend@"`
  &&&


A ``begins'' test checks for the presence of the second string at the start of
the first, both strings having been expanded.

  &&&
  `     `<'text1'>` ends `<'text2'>
  `     `<'text1'>` does not end `<'text2'>
  `e.g. $header_from: ends "public.com.example"`
  &&&


An ``ends'' test checks for the presence of the second string at the end of
the first, both strings having been expanded.

  &&&
  `     `<'text1'>` is `<'text2'>
  `     `<'text1'>` is not `<'text2'>
  `e.g. $local_part_suffix is "-foo"`
  &&&


An ``is'' test does an exact match between the strings, having first expanded
both strings.

  &&&
  `     `<'text1'>` contains `<'text2'>
  `     `<'text1'>` does not contain `<'text2'>
  `e.g. $header_subject: contains "evolution"`
  &&&


A ``contains'' test does a partial string match, having expanded both strings.

  &&&
  `     `<'text1'>` matches `<'text2'>
  `     `<'text1'>` does not match `<'text2'>
  `e.g. $sender_address matches "(bill|john)@"`
  &&&


For a ``matches'' test, after expansion of both strings, the second one is
interpreted as a regular expression. Exim uses the PCRE regular expression
library, which provides regular expressions that are compatible with Perl.

The match succeeds if the regular expression matches any part of the first
string. If you want a regular expression to match only at the start or end of
the subject string, you must encode that requirement explicitly, using the `^`
or `$` metacharacters. The above example, which is not so constrained, matches
all these addresses:

    bill@???
    john@???
    spoonbill@???
    littlejohn@???


To match only the first two, you could use this:

    if $sender_address matches "^(bill|john)@" then ...


Care must be taken if you need a backslash in a regular expression, because
backslashes are interpreted as escape characters both by the string expansion
code and by Exim's normal processing of strings in quotes. For example, if you
want to test the sender address for a domain ending in '.com' the regular
expression is

    \.com$


The backslash and dollar sign in that expression have to be escaped when used
in a filter command, as otherwise they would be interpreted by the expansion
code. Thus, what you actually write is

    if $sender_address matches \\.com\$


An alternative way of handling this is to make use of the `\N` expansion
flag for suppressing expansion:

    if $sender_address matches \N\.com$\N


Everything between the two occurrences of `\N` is copied without change by
the string expander (and in fact you do not need the final one, because it is
at the end of the string). If the regular expression is given in quotes
(mandatory only if it contains white space) you have to write either

    if $sender_address matches "\\\\.com\\$"


or

    if $sender_address matches "\\N\\.com$\\N"



If the regular expression contains bracketed sub-expressions, numeric
variable substitutions such as $1$ can be used in the subsequent actions
after a successful match. If the match fails, the values of the numeric
variables remain unchanged. Previous values are not restored after ^endif^.
In other words, only one set of values is ever available. If the condition
contains several sub-conditions connected by ^and^ or ^or^, it is the
strings extracted from the last successful match that are available in
subsequent actions. Numeric variables from any one sub-condition are also
available for use in subsequent sub-conditions, because string expansion of a
condition occurs just before it is tested.
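For example, this sketch (the mailbox layout is invented for illustration) uses $1$ to file
messages into a folder named after the first word of the subject:

    # Exim filter
    # Illustrative only: $1 holds the first bracketed sub-expression
    if $h_subject: matches "^([a-z]+):" then
      save $home/mail/$1
    endif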


Numeric testing conditions
~~~~~~~~~~~~~~~~~~~~~~~~~~
The following conditions are available for performing numerical tests:

  &&&
  `     `<'number1'>` is above `<'number2'>
  `     `<'number1'>` is not above `<'number2'>
  `     `<'number1'>` is below `<'number2'>
  `     `<'number1'>` is not below `<'number2'>
  `e.g. $message_size is not above 10k`
  &&&


The <'number'> arguments must expand to strings of digits, optionally followed
by one of the letters K or M (upper case or lower case) which cause
multiplication by 1024 and 1024x1024 respectively.
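For example, this sketch (the mailbox name is invented for illustration) files unusually
large messages separately:

    # Exim filter
    if $message_size is above 1M then
      save $home/mail/big
    endif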


Testing for significant deliveries
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
You can use the ^delivered^ condition to test whether or not any previously
obeyed filter commands have set up a significant delivery. For example:

    if not delivered then save mail/anomalous endif


``Delivered'' is perhaps a poor choice of name for this condition, because the
message has not actually been delivered; rather, a delivery has been set up for
later processing.


Testing for error messages
~~~~~~~~~~~~~~~~~~~~~~~~~~
The condition ^error_message^ is true if the incoming message is a bounce
(mail delivery error) message. Putting the command

    if error_message then finish endif


at the head of your filter file is a useful insurance against things going
wrong in such a way that you cannot receive delivery error reports. *Note*:
^error_message^ is a condition, not an expansion variable, and therefore is
not preceded by `$`.


Testing a list of addresses
~~~~~~~~~~~~~~~~~~~~~~~~~~~
There is a facility for looping through a list of addresses and applying a
condition to each of them. It takes the form

&&&
`foranyaddress `<'string'>` (`<'condition'>`)`
&&&

where <'string'> is interpreted as a list of RFC 2822 addresses, as in a
typical header line, and <'condition'> is any valid filter condition or
combination of conditions. The ``group'' syntax that is defined for certain
header lines that contain addresses is supported.

The parentheses surrounding the condition are mandatory, to delimit it from
possible further sub-conditions of the enclosing ^if^ command. Within the
condition, the expansion variable $thisaddress$ is set to the non-comment
portion of each of the addresses in the string in turn. For example, if the
string is

    B.Simpson <bart@???>, lisa@??? (his sister)


then $thisaddress$ would take on the values `bart@???` and
`lisa@???` in turn.

If there are no valid addresses in the list, the whole condition is false. If
the internal condition is true for any one address, the overall condition is
true and the loop ends. If the internal condition is false for all addresses in
the list, the overall condition is false. This example tests for the presence
of an eight-digit local part in any address in a 'To:' header:

    if foranyaddress $h_to: ( $thisaddress matches ^\\d{8}@ ) then ...


When the overall condition is true, the value of $thisaddress$ in the
commands that follow ^then^ is the last value it took on inside the loop. At
the end of the ^if^ command, the value of $thisaddress$ is reset to what it
was before. It is best to avoid the use of multiple occurrences of
^foranyaddress^, nested or otherwise, in a single ^if^ command, if the
value of $thisaddress$ is to be used afterwards, because it isn't always
clear what the value will be. Nested ^if^ commands should be used instead.
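For example, this purely illustrative sketch uses nested ^if^ commands so that each use of
$thisaddress$ is unambiguous:

    # Exim filter
    # Each foranyaddress is tested in its own if command
    if foranyaddress $h_to: ( $thisaddress contains "-list@" ) then
      if foranyaddress $h_from: ( $thisaddress contains "-request@" ) then
        save $home/mail/lists
      endif
    endif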

Header lines can be joined together if a check is to be applied to more than
one of them. For example:

    if foranyaddress $h_to:,$h_cc: ....


scans through the addresses in both the 'To:' and the 'Cc:' headers.


[[SECTpersonal]]
Testing for personal mail
~~~~~~~~~~~~~~~~~~~~~~~~~
A common requirement is to distinguish between incoming personal mail and mail
from a mailing list, or from a robot or other automatic process (for example, a
bounce message). In particular, this test is normally required for ``vacation
messages''.

The ^personal^ condition checks that the message is not a bounce message and
that the current user's email address appears in the 'To:' header. It also
checks that the sender is not the current user or one of a number of common
daemons, and that there are no header lines starting 'List-' in the message.
Finally, it checks the content of the 'Precedence:' header line, if there is
one.

You should always use the ^personal^ condition when generating automatic
responses. This example shows the use of ^personal^ in a filter file that is
sending out vacation messages:

    if personal then
      mail to $reply_address
      subject "I am on holiday"
      file $home/vacation/message
      once $home/vacation/once
      once_repeat 10d
    endif


It is tempting, when writing commands like the above, to quote the original
subject in the reply. For example:

    subject "Re: $h_subject:"


There is a danger in doing this, however. It may allow a third party to
subscribe you to an opt-in mailing list, provided that the list accepts bounce
messages as subscription confirmations. (Messages sent from filters are always
sent as bounce messages.) Well-managed lists require a non-bounce message to
confirm a subscription, so the danger is relatively small.

If prefixes or suffixes are in use for local parts -- something which depends
on the configuration of Exim (see section <<SECTmbox>> below) -- the tests for
the current user are done with the full address (including the prefix and
suffix, if any) as well as with the prefix and suffix removed. If the system is
configured to rewrite local parts of mail addresses, for example, to rewrite
`dag46` as `Dirk.Gently`, the rewritten form of the address is also used in
the tests.



Alias addresses for the personal condition
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
It is quite common for people who have mail accounts on a number of different
systems to forward all their mail to one system, and in this case a check for
personal mail should test all their various mail addresses. To allow for this,
the ^personal^ condition keyword can be followed by

&&&
`alias `<'address'>
&&&

any number of times, for example

    if personal alias smith@???
                alias jones@???
    then ...


The alias addresses are treated as alternatives to the current user's email
address when testing the contents of header lines.


Details of the personal condition
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
The basic ^personal^ test is roughly equivalent to the following:

    not error_message and
    $message_headers does not contain "\nList-" and
    $header_auto-submitted: does not contain "auto-" and
    $header_precedence: does not contain "bulk" and
    $header_precedence: does not contain "list" and
    $header_precedence: does not contain "junk" and
    foranyaddress $header_to:
      ( $thisaddress contains "$local_part@$domain" ) and
    not foranyaddress $header_from:
      (
      $thisaddress contains "$local_part@$domain" or
      $thisaddress contains "server@" or
      $thisaddress contains "daemon@" or
      $thisaddress contains "root@" or
      $thisaddress contains "listserv@" or
      $thisaddress contains "majordomo@" or
      $thisaddress contains "-request@" or
      $thisaddress matches  "^owner-[^@]+@"
      )


The variable $local_part$ contains the local part of the mail address of
the user whose filter file is being run -- it is normally your login id. The
$domain$ variable contains the mail domain. As explained above, if aliases
or rewriting are defined, or if prefixes or suffixes are in use, the tests for
the current user are also done with alternative addresses.




Testing delivery status
~~~~~~~~~~~~~~~~~~~~~~~
There are two conditions that are intended mainly for use in system filter
files, but which are available in users' filter files as well. The condition
^first_delivery^ is true if this is the first process that is attempting to
deliver the message, and false otherwise. This indicator is not reset until the
first delivery process successfully terminates; if there is a crash or a power
failure (for example), the next delivery attempt is also a ``first delivery''.

In a user filter file ^first_delivery^ will be false if there was previously an
error in the filter, or if a delivery for the user failed owing to, for
example, a quota error, or if forwarding to a remote address was deferred for
some reason.

The condition ^manually_thawed^ is true if the message was ``frozen'' for some
reason, and was subsequently released by the system administrator. It is
unlikely to be of use in users' filter files.
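For example, a filter might use ^first_delivery^ like this (the script path is invented for
illustration):

    # Exim filter
    # Run the alert only on the first delivery attempt
    if first_delivery then
      unseen pipe "$home/bin/new-mail-alert"
    endif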


[[SECTmbox]]
Multiple personal mailboxes
~~~~~~~~~~~~~~~~~~~~~~~~~~~
The system administrator can configure Exim so that users can set up variants
on their email addresses and handle them separately. Consult your system
administrator or local documentation to see if this facility is enabled on your
system, and if so, what the details are.

The facility involves the use of a prefix or a suffix on an email address. For
example, all mail addressed to 'lg303-'<'something'> would be the property of
user 'lg303', who could determine how it was to be handled, depending on the
value of <'something'>.

There are two possible ways in which this can be set up. The first possibility
is the use of multiple '.forward' files. In this case, mail to 'lg303-foo',
for example, is handled by looking for a file called _.forward-foo_ in
'lg303'{ap}s home directory. If such a file does not exist, delivery fails and the
message is returned to its sender.

The alternative approach is to pass all messages through a single _.forward_
file, which must be a filter file so that it can distinguish between the
different cases by referencing the variables $local_part_prefix$ or
$local_part_suffix$, as in the final example in section <<SECTex>> below.

It is possible to configure Exim to support both schemes at once. In this case,
a specific _.forward-foo_ file is first sought; if it is not found, the basic
_.forward_ file is used.

The ^personal^ test (see section <<SECTpersonal>>) includes prefixes and
suffixes in its checking.



Ignoring delivery errors
~~~~~~~~~~~~~~~~~~~~~~~~
As was explained above, filtering just sets up addresses for delivery -- no
deliveries are actually done while a filter file is active. If any of the
generated addresses subsequently suffers a delivery failure, an error message
is generated in the normal way. However, if a filter command that sets up a
delivery is preceded by the word ^noerror^, errors for that delivery,
'and any deliveries consequent on it' (that is, from alias, forwarding, or
filter files it invokes) are ignored.
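For example (the copy address is invented for illustration):

    # Exim filter
    # A failure of this copy delivery produces no error message
    noerror deliver copy@archive.example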



[[SECTex]]
Examples of Exim filter commands
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Simple forwarding:

    # Exim filter
    deliver baggins@???


Vacation handling using traditional means, assuming that the _.vacation.msg_
and other files have been set up in your home directory:

    # Exim filter
    unseen pipe "/usr/ucb/vacation \"$local_part\""


Vacation handling inside Exim, having first created a file called
_.vacation.msg_ in your home directory:

    # Exim filter
    if personal then vacation endif


File some messages by subject:

    # Exim filter
    if $header_subject: contains "empire" or
       $header_subject: contains "foundation"
    then
       save $home/mail/f+e
    endif


Save all non-urgent messages by weekday:

    # Exim filter
    if $header_subject: does not contain "urgent" and
       $tod_full matches "^(...),"
    then
      save $home/mail/$1
    endif


Throw away all mail from one site, except from postmaster:

    # Exim filter
    if $reply_address contains "@spam.site.example" and
       $reply_address does not contain "postmaster@"
    then
       seen finish
    endif


Handle multiple personal mailboxes:

    # Exim filter
    if $local_part_suffix is "-foo"
    then
      save $home/mail/foo
    elif $local_part_suffix is "-bar"
    then
      save $home/mail/bar
    endif




File/diff for spec.ascd is too large (1382929 bytes > 1024000 bytes)!

Index: x2man
====================================================================
#! /usr/bin/perl -w

# $Cambridge: exim/exim-doc/doc-docbook/x2man,v 1.1 2005/06/16 10:32:31 ph10 Exp $

# Script to find the command line options in the DocBook source of the Exim
# spec, and turn them into a man page, because people like that.




  ##################################################
  #              Main Program                      #
  ##################################################


open(IN, "spec.xml") || die "Can't open spec.xml\n";
open(OUT, ">exim.8" ) || die "Can't open exim.8\n";

print OUT <<End;
.TH EXIM 8
.SH NAME
exim \\- a Mail Transfer Agent
.SH SYNOPSIS
.B exim [options] arguments ...
.br
.B mailq [options] arguments ...
.br
.B rsmtp [options] arguments ...
.br
.B rmail [options] arguments ...
.br
.B runq [options] arguments ...
.br
.B newaliases [options] arguments ...

.SH DESCRIPTION
.rs
.sp
Exim is a mail transfer agent (MTA) developed at the University of Cambridge.
It is a large program with very many facilities. For a full specification, see
the reference manual. This man page contains only a description of the command
line options. It has been automatically generated from the reference manual
source, hopefully without too much mangling.

.SH SETTING OPTIONS BY PROGRAM NAME
.rs
.TP 10
\\fBmailq\\fR
Behave as if the option \\fB\\-bp\\fP were present before any other options.
The \\fB\\-bp\\fP option requests a listing of the contents of the mail queue
on the standard output.
.TP
\\fBrsmtp\\fR
Behaves as if the option \\fB\\-bS\\fP were present before any other options,
for compatibility with Smail. The \\fB\\-bS\\fP option is used for reading in a
number of messages in batched SMTP format.
.TP
\\fBrmail\\fR
Behave as if the \\fB\\-i\\fP and \\fB\\-oee\\fP options were present before
any other options, for compatibility with Smail. The name \\fBrmail\\fR is used
as an interface by some UUCP systems. The \\fB\\-i\\fP option specifies that a
dot on a line by itself does not terminate a non\\-SMTP message; \\fB\\-oee\\fP
requests that errors detected in non\\-SMTP messages be reported by emailing
the sender.
.TP
\\fBrunq\\fR
Behave as if the option \\fB\\-q\\fP were present before any other options, for
compatibility with Smail. The \\fB\\-q\\fP option causes a single queue runner
process to be started. It processes the queue once, then exits.
.TP
\\fBnewaliases\\fR
Behave as if the option \\fB\\-bi\\fP were present before any other options,
for compatibility with Sendmail. This option is used for rebuilding Sendmail's
alias file. Exim does not have the concept of a single alias file, but can be
configured to run a specified command if called with the \\fB\\-bi\\fP option.

.SH OPTIONS
.rs
End

while (<IN>) { last if /^<!-- === Start of command line options === -->\s*$/; }
die "Can't find start of options\n" if ! defined $_;

$optstart = 0;
$indent = "";

# Loop for each individual option

$next = <IN>;

  while ($next)
    {
    $_ = $next;
    $next = <IN>;


    last if /^<!-- === End of command line options === -->\s*$/;


    # Start of new option


    if (/^<term>$/)
      {
      print OUT ".TP 10\n";
      $optstart = 1;
      next;
      }


    # If a line contains text that is not in <>, read subsequent lines of the
    # same form, so that we get whole sentences for matching on references.


    if (/^ (?> (<[^>]+>)* ) \s*\S/x)
      {
      while ($next =~ /^ (?> (<[^>]+>)* ) \s*\S/x)
        {
        $_ .= $next;
        $next = <IN>;
        }
      }


    # Remove sentences or parenthetical comments that refer to chapters or
    # sections. The order of these changes is very important:
    #
    # (1) Remove any parenthetical comments first.
    # (2) Then remove any sentences that start after a full stop.
    # (3) Then remove any sentences that start at the beginning or a newline.


    s/\s?\(  [^()]+ <xref \s linkend="[^"]+" \/ > \)//xg;
    s/\s?\.  [^.]+ <xref \s linkend="[^"]+" \/ > [^.]*? \././xg;
    s/(^|\n) [^.]+ <xref \s linkend="[^"]+" \/ > [^.]*? \./$1/xg;


    # Handle paragraph starts; skip the first one encountered for an option


    if ($optstart && /<(sim)?para>/)
      {
      s/<(sim)?para>//;
      $optstart = 0;
      }


    # Literal layout needs to be treated as a paragraph, and indented


    if (/<literallayout/)
      {
      s/<literallayout[^>]*>/.P/;
      $indent = "  ";
      }


    $indent = "" if (/<\/literallayout>/);


    # Others get marked


    s/<para>/.P/;
    s/<simpara>/.P/;


    # Skip index entries


    s/<primary>.*?<\/primary>//g;
    s/<secondary>.*?<\/secondary>//g;


    # Convert all occurrences of backslash into \e


    s/\\/\\e/g;


    # Handle bold and italic


    s/<emphasis>/\\fI/g;
    s/<emphasis role="bold">/\\fB/g;
    s/<\/emphasis>/\\fP/g;


    s/<option>/\\fB/g;
    s/<\/option>/\\fP/g;


    s/<varname>/\\fI/g;
    s/<\/varname>/\\fP/g;


    # Handle quotes


    s/<\/?quote>/"/g;


    # Remove any remaining XML markup


    s/<[^>]*>//g;


    # If nothing left in the line, ignore.


    next if /^\s*$/;


    # It turns out that we don't actually want .P; a blank line is needed.
    # But we can't set that above, because it will be discarded.


    s/^\.P\s*$/\n/;


    # We are going to output some data; sort out special characters


    s/&lt;/</g;
    s/&gt;/>/g;


    s/&#x002d;/-/g;
    s/&#x00a0;/ /g;
    s/&#x2013;/-/g;
    s/&#x2019;/'/g;
    s/&#8230;/.../g;    # Sic - no x


    # Escape hyphens to prevent unwanted hyphenation


    s/-/\\-/g;


    # Put in the indent, and write the line


    s/^/$indent/mg;


    print OUT;
    }


# End of x2man

  Index: ABOUT
  ===================================================================
  RCS file: /home/cvs/exim/exim-doc/doc-scripts/ABOUT,v
  retrieving revision 1.1
  retrieving revision 1.2
  diff -u -r1.1 -r1.2
  --- ABOUT    8 Oct 2004 10:38:48 -0000    1.1
  +++ ABOUT    16 Jun 2005 10:32:31 -0000    1.2
  @@ -1,9 +1,12 @@
  -$Cambridge: exim/exim-doc/doc-scripts/ABOUT,v 1.1 2004/10/08 10:38:48 ph10 Exp $
  +$Cambridge: exim/exim-doc/doc-scripts/ABOUT,v 1.2 2005/06/16 10:32:31 ph10 Exp $


CVS directory exim/exim-doc/doc-scripts
---------------------------------------

-This directory contains various scripts that are used to build the distributed
-documentation from its source files.
+This directory contains various scripts that are used to build the distributed
+documentation from its SGCAL source files. This method of maintaining the
+documentation was used up to and including release 4.50 of Exim. There is also
+a script for building the FAQ from its source. This is still (June 2005)
+current, but may be superseded in due course.

End

  Index: ABOUT
  ===================================================================
  RCS file: /home/cvs/exim/exim-doc/doc-src/ABOUT,v
  retrieving revision 1.1
  retrieving revision 1.2
  diff -u -r1.1 -r1.2
  --- ABOUT    8 Oct 2004 10:38:48 -0000    1.1
  +++ ABOUT    16 Jun 2005 10:32:31 -0000    1.2
  @@ -1,11 +1,14 @@
  -$Cambridge: exim/exim-doc/doc-src/ABOUT,v 1.1 2004/10/08 10:38:48 ph10 Exp $
  +$Cambridge: exim/exim-doc/doc-src/ABOUT,v 1.2 2005/06/16 10:32:31 ph10 Exp $


CVS directory exim/exim-doc/doc-src
-----------------------------------

-This directory contains documentation files that are processed in some way in
-order to make the documentation files that form part of Exim distributions. A
-non-standard document processor is currently in use (October 2004), but in the
-long term something more standard will have to take over.
+This directory contains documentation files that are processed in some way in
+order to make the documentation files that form part of Exim distributions. A
+non-standard document processor (SGCAL) was used up to and including release
+4.50 of Exim to process the sources for the manual and filter document.
+Subsequent documentation releases operate using DocBook input, so these files
+are now historical relics. The FAQ source is still (June 2005) current, but may
+be superseded in due course.

End