Author: William Thompson
Date:
To: Philip Hazel
CC: exim-users
Subject: Re: [Exim] Exim & MySQL
> > What happens if I do:
> >
> > hosts = ${lookup mysql .... }
> >
> > and the lookup returns say 10,000 rows (also assuming \n is a separator like
> > : is). How would Exim handle this? I'm sure on systems with loads of
> > memory it wouldn't be a problem.
>
> If the lookup returns 10,000 rows, Exim will try to build a humungously
> long string. It wasn't designed with that in mind. I figured that people
> should be able to build more selection into their SQL statements....
>
> Let me review your problem. You want
>
> hosts = something
The 10,000 rows are a list of hosts and netblocks. The check would actually
be done in the RCPT ACL, because I will allow those hosts to email a specific
address if they're wrongly blocked.
> in a MAIL ACL, where "something" depends on the envelope sender domain.
> Is that right? (I'm not sure this is all that sensible, given that
> senders are trivially forged, but let's ignore that for now.) You can't
> say "domains =" because that refers to a recipient domain, and there
> isn't a "sender_domains =" condition in an ACL. If there were, you could
> say
>
> sender_domains = mysql;...
>
> and it would all be easy. But that isn't available, so you have to do it
> the hard way.
Yes, that would be easy, and I could do that. I'm thinking of cases where I
want to move flat files or lsearch-based files into SQL.
I have successfully moved my aliases and domain aliases, which used to use
lsearch, into SQL.
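In case it's useful to anyone else, a redirect router doing the alias lookup
from MySQL can look something along these lines (a sketch only; the table and
column names are invented, not my real schema):

    system_aliases:
      driver = redirect
      allow_fail
      allow_defer
      # quote_mysql protects the address parts inside the query
      data = ${lookup mysql{SELECT target FROM aliases \
             WHERE alias = '${quote_mysql:$local_part}' \
             AND domain = '${quote_mysql:$domain}'}}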
> Let's suppose there are 10,000 hosts in your list. My recommendation
> would be to find a way of rewriting the SQL query so that it returned
> just one host, namely, the host in $sender_host_name, or better still,
> the value of $sender_host_address so that the comparison can be done
> quicker. After all, Exim will do a linear search on 10,000 items,
> whereas databases have indexes and things to speed them up, don't they?
You're right here. Some things, like the local_domains list, might not have
the information available to check against, or may require looking at several
variables. In my case, since this server only hosts a few domains, returning
everything isn't a performance hit. I can see how tons of rows could be.
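To make that concrete, I think the single-row version of the query would be
something like this (same invented table name as above; note that an
exact-match WHERE clause like this handles plain addresses but not the
netblock entries):

    hosts = ${lookup mysql{SELECT host FROM exempt_hosts \
            WHERE host = '${quote_mysql:$sender_host_address}'}{$value}{}}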
> Can you not write an SQL query to do that? As I say, I'm no SQL expert,
> but presumably you can write a query that extracts your 10,000 records
> into a temporary table and then searches those for the matching one and
> returns just one value? I'm afraid I don't have time to dig out my MySQL
> book to give more detail than that.
I've done a bit with SQL since starting this project, but I don't know
everything about it. I was considering writing a Perl lsearch that would turn
the data returned from a DB into a flat file and search it.
Don't bother digging it out.
> > Just off the top of my head, is there an easy way (or any way at all) to
> > convert an @@lsearch* into an SQL db?
>
> Pass. My SQL experience isn't up to that off the top of my head.
You're the Exim guru, not the MySQL guru. =)
I'll let someone else who knows enough about it and SQL help me.
> > If I could have a lookup that uses another lookup, I don't think that'd be
> > a problem (unless reading all the rows is).
>
> You can use ${expand to re-expand, and you can use ${extract to extract
> data, but I really, really, would not advise returning 10,000 rows, for
> performance reasons.
I'm not sure about this one; I may not have said it correctly. I meant
something that would use the lsearch facility to search the rows (like lines
of a file) returned by the SQL statement.
The biggest thing is that I wanted to move all the files I had into SQL. The
net- thing wasn't good enough for me, since it only searches a single netmask.
This is where the whole idea came from.
I don't think this is possible, but I'll ask anyway: is there any way in the
Perl facility to allow Perl to use the currently cached SQL connections? I
suppose I could use the expand function, but that'd be one long string.
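For what it's worth, the Exim side of a Perl-based check would stay short
either way; something like the line below is all the ACL would need
(match_host is just a made-up sub name that would have to be registered via
perl_startup, and whether it could share Exim's cached SQL connections is
exactly what I'm asking):

    # true if the Perl sub returns "yes"/"true"/non-zero for this IP
    condition = ${perl{match_host}{$sender_host_address}}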