Re: [exim] Tools for SQL export to CDB

Author: W B Hacker
Date:
To: exim users
Subject: Re: [exim] Tools for SQL export to CDB
W B Hacker wrote:

> Folks,
>
> Does anyone have experience with / a recommendation for..
>
> .. tools or settings for exporting records from an SQL DB to
> a CDB? (or similar Berkeley or GDB, even?)
>

*trim*


At least with TinyCDB, it turns out to be dirt-simple:


Within SQL: [1]

- choose a 'raw' format, no headers, just a space as delimiter


- redirect output to a file, let's call it: raw_file
(or pipe [2])

(PostgreSQL has more than one way to do the above - see the sketch
after this list...)


- perform the select:

select (the key field), (the value field) from (the table);

add an appropriate 'where' clause, if need be...

An 'order by' clause is probably superfluous [3]

- close the output file
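
For instance - a sketch only, where the database, table, and column
names (maildb, mailusers, username, password) are placeholders for
whatever the real schema uses:

psql -A -t -F ' ' -c \
  "select username, password from mailusers" maildb > raw_file

psql's -A (unaligned), -t (rows only, no headers) and -F ' ' (space
as field separator) give the 'raw' layout described above. A
server-side alternative, on PostgreSQL versions that accept a
subselect in COPY:

COPY (select username, password from mailusers)
  TO '/path/to/raw_file' WITH DELIMITER ' ';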


Externally, [1, 2] :


cdb -c -m named_db raw_file

.... paths as appropriate


The '-m' tells TinyCDB to expect a 'map' format input file (plain
'key value' lines) instead of its native cdbmake-style record
format.

- something like this worked:

username1 password1
username2 password2
.
.
.
usernameN passwordN

Wherein there is simply a space between the two fields, and each
successive pair is on a new line.
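
(For comparison: without '-m', cdb -c expects the native
cdbmake-style records, i.e. something like

+9,9:username1->password1

one '+klen,dlen:key->data' record per line, terminated by an empty
line - more exact, but overkill for this job.)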


CLI query of the data is by:

cdb -q named_db usernameN

(optionally a '-m' to put multiple returned records on separate lines)

- which returns:

passwordN

(Tested)
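
Since the point of the exercise is Exim, the same file then serves
an ordinary single-key cdb lookup - a sketch only, with a made-up
path and a plaintext-password comparison purely for illustration:

server_condition = ${if eq{$auth3}\
  {${lookup{$auth2}cdb{/usr/local/etc/exim/named_db}{$value}fail}}}

(e.g. in a PLAIN authenticator, where $auth2 is the username and
$auth3 the password offered by the client.)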


Migration caveat:

A cdb file holds just a single key-to-value mapping, so each lookup
that PostgreSQL would serve from a different attribute or field
within a record or table needs its own separate cdb file. One might
name them accordingly.
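
E.g. (names purely illustrative) one might end up with:

cdb -c -m passwd_db passwd.raw
cdb -c -m aliases_db aliases.raw
cdb -c -m rcpt_domains_db rcpt_domains.raw

- one cdb per lookup Exim needs.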


Still seeking:

- pointers to sample syntax for complex queries, concatenation
of returned data, etc. The Exim manual is very sparse on cdb.

- Headspace check ;-)


Best,

Bill


[1] No need for the full pages of perl, php, etc. that a Google
search has turned up, as the plan is to store the SQL selects
inside PostgreSQL and set a trigger to fire the relevant one
whenever the keys or values it works with are changed.
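
A rough, untested sketch of such a trigger - every name (function,
table, columns, path) is a placeholder, server-side COPY needs
superuser rights, and the subselect form of COPY needs a reasonably
recent PostgreSQL:

create or replace function dump_passwd_raw() returns trigger as $$
begin
    -- rewrite the raw map file whenever the table changes
    copy (select username, password from mailusers)
        to '/var/db/exim/passwd.raw' with delimiter ' ';
    return null;
end;
$$ language plpgsql;

create trigger mailusers_changed
    after insert or update or delete on mailusers
    for each statement execute procedure dump_passwd_raw();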

[2] TO DO: The minuscule scraps of shell needed can probably either
be integrated with PG, or set up around named pipes, such that the
PostgreSQL stored procedure directs its output through them and
straight into the cdb build.

At this level of simplicity, I see no compelling reason to
utilize any other 'language'.
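
The named-pipe idea, very roughly (untested, paths illustrative,
and a fresh reader would have to be attached for every export):

mkfifo /var/db/exim/passwd.fifo
cdb -c -m /var/db/exim/passwd_db /var/db/exim/passwd.fifo &

with the stored procedure then doing its COPY ... TO
'/var/db/exim/passwd.fifo', so the map-format output never touches
an intermediate file.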

[3] hashed indices generally work best with a decent helping of
randomization.