Re: [exim] Tools for SQL export to CDB

Author: W B Hacker
Date:  
To: exim users
Subject: Re: [exim] Tools for SQL export to CDB
Jeff Garzik wrote:

> David Saez Padros wrote:
>
>> we use a cdb database here for white/black listing that is rebuilt every
>> 5 minutes from a mysql database (with more than 4 million ip addresses);
>> in our case the cdb read speed compensates for the database rebuild every 5
>> minutes. Of course this may be even better using some dbm-like database,
>> but we haven't tried it yet. For other purposes, where data is updated only
>> from time to time, it is even better (usernames/passwords, etc ...)
>
>
> I agree cdb is a nice solution, but I would love to see a comparison
> with SQlite.
>
> Replacing the cdb database every five minutes seems like it would
> destroy the kernel's ability to cache data.
>
>     Jeff

>
>
>

Muhhh .... 'the kernel' isn't what caches the data used here...
but never mind ...
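As an aside, the periodic SQL-to-cdb export David describes is easy to script: dump the rows in cdbmake(1) input format and pipe them through cdbmake. A minimal sketch (the row data is hard-coded here for illustration; a real script would fetch it from MySQL):

```python
# Turn (key, value) rows into djb's cdbmake input format:
#   +klen,dlen:key->data\n
# with one extra blank line terminating the stream.
def cdbmake_record(key, value):
    return "+%d,%d:%s->%s\n" % (len(key), len(value), key, value)

# Hypothetical rows, standing in for a MySQL result set.
rows = [("192.0.2.1", "black"), ("198.51.100.7", "white")]

stream = "".join(cdbmake_record(k, v) for k, v in rows) + "\n"
print(stream, end="")
```

Piping that output into `cdbmake blacklist.cdb blacklist.tmp` rebuilds the file atomically, since cdbmake renames the temp file into place only when it is complete.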

No external client/process can safely make an assumption about
the 'currency' of data drawn from a DB engine, be it SQLite, DB2
or even (especially) IMS.

One must ask the 'engine'. Always.
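A minimal sketch of what "asking the engine" means in practice, using SQLite via Python's stdlib sqlite3 (table and column names here are hypothetical):

```python
import sqlite3

db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE blacklist (ip TEXT PRIMARY KEY)")
db.execute("INSERT INTO blacklist VALUES ('192.0.2.1')")
db.commit()

def is_blacklisted(conn, ip):
    # Query the engine every time; never assume an earlier answer
    # is still current.
    row = conn.execute(
        "SELECT 1 FROM blacklist WHERE ip = ?", (ip,)).fetchone()
    return row is not None

print(is_blacklisted(db, "192.0.2.1"))   # True
db.execute("DELETE FROM blacklist WHERE ip = '192.0.2.1'")
db.commit()
print(is_blacklisted(db, "192.0.2.1"))   # False: the data moved under us
```

The second query gives a different answer because the data changed between calls; any copy of the first answer held outside the engine would now be stale.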

It is the very nature of such animals to be altering data
elsewhere while your process is off having lunch.

Special tools exist to alleviate that, most specifically in
Oracle, DB2, and similar 'industrial strength' DBMS
designed/modified for distributed/remote clustering.

See SQL-Relay for a PostgreSQL example. And note the challenges
in getting it to work. Likewise Oracle's debacle when they
rolled theirs out a year or so ago.

In that respect, the CDB is easier to 'vet' than *any* 'engine'-
based DB, as all you need to do is check the file's timestamp
- and some OSes have tools that watch selected files for modification.
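The timestamp check is a one-liner; a sketch of the idea (the path is hypothetical, and a real monitor would poll on a timer or use inotify/kqueue rather than a single comparison):

```python
import os

def cdb_changed(path, last_mtime):
    """Return (changed?, current mtime) for the cdb file."""
    mtime = os.stat(path).st_mtime
    return (mtime != last_mtime, mtime)

# usage: remember the mtime, re-check it before trusting cached lookups
# changed, seen = cdb_changed("/var/db/exim/blacklist.cdb", seen)
```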

As to timing - well 'five minutes' is an archaeological epoch
for a CPU, or even a storage device.

Caches are updated and flushed on timescales of picoseconds, nanoseconds,
microseconds, or milliseconds, depending on how close they are to the silicon...

Bill