POD for DBD::RAM version 0.03 (fwd)
corliss at alaskapm.org
Thu Mar 9 10:11:30 CST 2000
Any DBI users out there? This guy's got a neat module under construction for
modeling data-layers w/o an RDBMS. Depending on how efficient it is, I could
see a lot of other uses for it, too. . .
--Arthur Corliss
Perl Monger/Alaska Perl Mongers
http://www.alaskapm.org/
---------- Forwarded message ----------
Date: 9 Mar 2000 06:13:22 -0800
From: Jeff Zucker <jeff at vpservices.com>
To: dbi-users <dbi-users at isc.org>
Subject: POD for DBD::RAM version 0.03
*** From dbi-users - To unsubscribe, see the end of this message. ***
*** DBI Home Page - http://www.symbolstone.org/technology/perl/DBI/ ***
As requested by Tim, here's the whole current POD. Comments, gripes,
and suggestions eagerly solicited.
--
Jeff
=head1 NAME
DBD::RAM - a DBI driver for in-memory data structures
=head1 SYNOPSIS
use DBI;
my $dbh = DBI->connect( 'DBI:RAM:','','',{RaiseError=>1} );
$dbh->func(
{ data_type => 'array',
table_name => 'phrases',
col_names => 'id,phrase',
},
[
[1,'Hello new world!'],
[2,'Junkity Junkity Junk'],
],
'import' );
my $sth = $dbh->prepare( "SELECT phrase FROM phrases WHERE id < ?" );
$sth->execute(2);
while (my @row = $sth->fetchrow_array) {
for (@row) { print "$_\n" if $_; }
}
See also, below, for creating tables from fixed-width records, from
name=value records, from hashes, from any DBI-accessible database,
and from other user-defined data structures.
All syntax supported by SQL::Statement and all methods supported
by DBD::CSV are also supported; see their documentation for details.
=head1 DESCRIPTION
The DBD::RAM module allows you to import almost any type of Perl data
structure into an in-memory table and then use DBI and SQL to access
and modify it. Currently the following types of data are supported:
'array' imports an array of arrayrefs
'hash' imports an array of hashrefs
'name=value' imports an array of name=value strings
'fixed-width' imports an array of fixed-width record strings
'sth' imports a statement handle from any other DBI database
'user' imports an array of user-defined data structures
With a data type of 'user', you can pass a pointer to a subroutine
that parses the data, thus making the module very extendable.
DBD::RAM allows you to prototype a database without an RDBMS
or other database engine and without creating or reading any
disk files.
The module is based on Jochen Wiedmann's SQL::Statement and
DBD::File modules and behaves exactly like DBD::CSV without the
file reading and writing operations.
=head1 WARNING
This module is in a rapid development phase and is likely to change
quite often over the next few days/weeks. I will try to keep the
interface as stable as possible, but if you are going to start using
this in places where it will be difficult to modify, you might want
to ask me about the stability of the features you are using.
=head1 INSTALLATION & PREREQUISITES
This module should work on any platform that DBI works on.
You don't need an external SQL engine, a running server, or a
compiler. All you need are Perl itself and installed versions of DBI
and SQL::Statement. If you do *not* also have DBD::CSV installed you
will need to either install it, or simply copy File.pm into your DBD
directory.
For this first release, there is no makefile; just copy RAM.pm
into your DBD directory.
=head1 CREATING A DATABASE
In-memory tables may be created using standard CREATE/INSERT
statements, or using the DBD::RAM-specific import() method:
  $dbh->func( $spec, $data, 'import' );
The $spec parameter is a hashref containing:
  table_name  a string holding the name of the table
  col_names   a string containing the column names, separated by commas
  data_type   one of: array, hash, etc.; see below for the full list
  pattern     a string containing an unpack pattern (fixed-width only)
  parser      a pointer to a parsing subroutine (user only)
The $data parameter is an arrayref containing an array of the type
specified in the $spec{data_type} parameter, holding the actual table
data.
Data types for the data_type parameter currently include: array, hash,
fixed-width, name=value, sth, and user.
See below for examples.
=head2 FROM AN ARRAY OF ARRAYS
$dbh->func(
{
data_type => 'array',
table_name => 'phrases',
col_names => 'id,phrase',
},
[
[1,'Hello new world!'],
[2,'Junkity Junkity Junk'],
],
'import' );
=head2 FROM AN ARRAY OF HASHES
$dbh->func(
{ table_name => 'phrases',
col_names => 'id,phrase',
data_type => 'hash',
},
[
{id=>1,phrase=>'Hello new world!'},
{id=>2,phrase=>'Junkity Junkity Junk'},
],
'import' );
=head2 FROM AN ARRAY OF NAME=VALUE STRINGS
$dbh->func(
{ table_name => 'phrases', # ARRAY OF NAME=VALUE PAIRS
col_names => 'id,phrase',
data_type => 'name=value',
},
[
'1=Hello new world!',
'2=Junkity Junkity Junk',
],
'import' );
=head2 FROM AN ARRAY OF FIXED-WIDTH RECORDS
$dbh->func(
{ table_name => 'phrases',
col_names => 'id,phrase',
data_type => 'fixed-width',
pattern => 'a1 a20',
},
[
'1Hello new world! ',
'2Junkity Junkity Junk',
],
'import' );
The $spec{pattern} value should be a string describing the fixed-width
record. See the Perl documentation on "unpack()" for details.
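For instance, the 'a1 a20' pattern above splits each record into a
1-character id and a 20-character phrase. A minimal sketch using plain
unpack(), independent of DBD::RAM (the record string is taken from the
example above):

```perl
use strict;
use warnings;

# Split a fixed-width record with the same pattern used in the
# import() example above: 1-character id, 20-character phrase.
my $record = '2Junkity Junkity Junk';
my ($id, $phrase) = unpack 'a1 a20', $record;

print "id=$id phrase=$phrase\n";
```

Note that 'a' keeps trailing spaces intact; use 'A' in your pattern if
you want them stripped.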
=head2 FROM ANOTHER DBI DATABASE
You can import information from any other DBI accessible database with
the data_type set to 'sth' in the import() method. First connect to
the
other database via DBI and get a database handle for it separate from
the
database handle for DBD::RAM. Then do a prepare and execute to get a
statement handle for a SELECT statement into that database. Then pass
the
statement handle to the DBD::RAM import() method which will perform the
fetch and insert the fetched fields and records into the DBD::RAM
table.
After the import() statement, you can then close the database
connection
to the other database.
Here's an example using DBD::CSV --
my $dbh_csv = DBI->connect('DBI:CSV:','','',{RaiseError=>1});
my $sth_csv = $dbh_csv->prepare("SELECT * FROM mytest_db");
$sth_csv->execute();
$dbh->func(
{ table_name => 'phrases',
col_names => 'id,phrase',
data_type => 'sth',
},
[$sth_csv],
'import'
);
$dbh_csv->disconnect();
=head2 FROM USER-DEFINED DATA STRUCTURES
$dbh->func(
{ table_name => 'phrases', # USER DEFINED STRUCTURE
col_names => 'id,phrase',
data_type => 'user',
parser => sub { split /=/,shift },
},
[
'1=Hello new world!',
'2=Junkity Junkity Junk',
],
'import' );
This example shows a way to implement a simple name=value parser.
The subroutine can be as complex as you like, however, and could, for
example, call XML or HTML or other parsers, or do any kind of
fetching or massaging of data (e.g. put in some LWP calls to websites
as part of the data massaging). [Note: the actual name=value
implementation in the DBD uses a slightly more complex regex to be
able to handle equal signs in the value.]
The parsing subroutine must accept a row of data in the user-defined
format and return it as an array. The import() method cycles through
the array of data and sends each element to your parser subroutine,
which should return an array whose elements are in the same order as
the column names you specified in the import() statement. In the
example above, the sub accepts a string and returns an array.
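As a sketch of the point made in the note above, one simple way to let
equal signs survive inside the value is to pass a limit of 2 to
split(), so the row is only split on the first '=' (this is an
illustration, not the module's actual implementation):

```perl
use strict;
use warnings;

# A name=value parser that splits on only the FIRST '=',
# so equal signs inside the value are preserved.
my $parser = sub { split /=/, shift, 2 };

my ($id, $phrase) = $parser->('1=E=mc squared');
print "id=$id phrase=$phrase\n";
```

A parser like this could be passed directly as the 'parser' entry of
the $spec hashref in the user-defined example above.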
PLEASE NOTE: If you develop generally useful parser routines that
others might also be able to use, send them to me and I can
incorporate them into the DBD itself.
=head2 FROM SQL STATEMENTS
You may also create tables with standard SQL syntax using CREATE
TABLE and INSERT statements. Or you can create a table with the
import() method and later populate it using INSERT statements.
However the table is created, it can be modified and accessed with
all SQL syntax supported by SQL::Statement.
=head1 USING MULTIPLE TABLES
A single script can create as many tables as your RAM will support,
and you can have multiple statement handles open to the tables
simultaneously. This allows you to simulate joins and multi-table
operations by iterating over several statement handles at once.
=head1 TO DO
Lots of stuff. For now, I think the main thing I need to puzzle out is
an export() method -- dumping the data back into files and/or other
databases.
Make some defaults for import() so it will add a table name and
column names if none are supplied, and so it can quickly import from
<DATA> after a script's __END__ for really fast and dirty
prototyping.
Let me know what else...
=head1 AUTHOR
Jeff Zucker <jeff at vpservices.com>
Copyright (c) 2000 Jeff Zucker. All rights reserved. This program is
free software; you can redistribute it and/or modify it under the
same terms as Perl itself as specified in the Perl README file.
Portions copyright (c) 1998 by Jochen Wiedmann.
This is alpha software; no warranty of any kind is implied.
=head1 SEE ALSO
DBI, DBD::CSV, SQL::Statement
=cut