Anyone doing Perl/XML?

Scott Walters phaedrus at
Fri Mar 9 15:04:05 CST 2001


Planning on writing a minimal XML writer, and perhaps a reader that somehow (don't ask me how)
maps the data to a good relational database (i.e., the first four normal forms).

I haven't put too much thought into it yet (except figuring out how I'm going to get out of
having to do it, which has so far been unsuccessful), except that I can assume the tables
will follow the tree structure of the data. For example: the first 3 layers of
depth go to one table; the next 5 (let's say) layers of depth go to another table,
which relates back to the first; the next 2 layers of depth (for example) go to a third
table, which relates to the second. As given attributes at a given depth change,
relational keys change.
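To make that concrete, here's a rough sketch of the depth-to-table mapping (the table names and depth ranges are made up for illustration, not a real schema):

```perl
#!/usr/bin/perl -w
use strict;

# Hypothetical map from tree depth to table: depths 1-3 go to one
# table, 4-8 to a second, 9-10 to a third. Each table would carry a
# foreign key back to the table above it.
my @table_for_depth = (
    [ 1, 3,  'doc_head' ],
    [ 4, 8,  'doc_body' ],
    [ 9, 10, 'doc_leaf' ],
);

sub table_for {
    my $depth = shift;
    for my $t (@table_for_depth) {
        my ($lo, $hi, $name) = @$t;
        return $name if $depth >= $lo && $depth <= $hi;
    }
    return undef;    # deeper than the schema knows about
}

print table_for(2), "\n";   # doc_head
print table_for(5), "\n";   # doc_body
```

The real work, of course, is deciding where the depth boundaries fall and emitting new key values as attributes change — this just shows the lookup side.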

This is an extension of a report-generation technique I use elsewhere: as different
rows in the output of a query change, data is aggregated (i.e., a new column or row in a chart),
new headers are inserted, etc., etc. This has proved a great way to abstract the details of
reporting on data from arbitrary queries.
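For the curious, that control-break trick looks roughly like this (the column names and rows here are invented for illustration):

```perl
#!/usr/bin/perl -w
use strict;

# Control-break style: watch a key column in the query output; when
# its value changes, close out the old group (here, a subtotal line).
my @rows = (
    { region => 'west', amt => 10 },
    { region => 'west', amt => 5  },
    { region => 'east', amt => 7  },
);

my ($last, $sum) = ('', 0);
for my $r (@rows) {
    if ($r->{region} ne $last) {
        print "-- $last: $sum\n" if $last ne '';
        ($last, $sum) = ($r->{region}, 0);
    }
    $sum += $r->{amt};
}
print "-- $last: $sum\n" if $last ne '';
```

The XML idea is the same shape, just with "depth and attributes changed" in place of "column value changed", and "new relational key" in place of "new subtotal".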

What I'm interested to know is:

0) has this been done already, or been tried and proven impractical?
a) is anyone else interested in this?
I) does anyone know any good sauces that go with blueberry pasta? I seem to have blueberry pasta...
x) do the event-driven and data-structure-driven models of most XML parsers seem like
   the wrong approach to anyone else?

>           sysread HANDLE,my $slurp,-s HANDLE;
Rob, have you benchmarked my $slurp = `cat $fn` ? I wanna know =) And how do DSPs differ
from traditional processors? What makes a DSP a DSP? I'm curious =)
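In case anyone wants to race them, here's a rough Benchmark harness for the three slurp styles (the scratch file, its size, and the iteration count are all arbitrary). One caveat with the backticks version: `cat` runs in a child process, so its CPU time lands in Benchmark's cusr/csys columns rather than usr/sys, and the fork/exec overhead shows up in wall-clock:

```perl
#!/usr/bin/perl -w
use strict;
use Benchmark qw(timethese);

# Scratch file to slurp; name and size are arbitrary.
my $fn = "/tmp/slurp-test-$$";
open my $out, '>', $fn or die "can't write $fn: $!";
print $out 'x' x (1024 * 1024);   # 1 MB of filler
close $out;

timethese(50, {
    'sysread'   => sub { open my $fh, '<', $fn or die $!;
                         sysread $fh, my $slurp, -s $fh; },
    'do-local'  => sub { open my $fh, '<', $fn or die $!;
                         my $slurp = do { local $/; <$fh> }; },
    'backticks' => sub { my $slurp = `cat $fn`; },
});

unlink $fn or die "can't unlink $fn: $!";
```

I'd expect sysread to keep winning — it's one syscall into a preallocated buffer, while the `$/` slurp goes through PerlIO and backticks pay for a fork, an exec, and a pipe copy on top of the read itself.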

Thanks again, Doug, for hosting another worship session for us miscreants =)



On Fri, 9 Mar 2001, Svirskas Rob-ERS007 wrote:

> Doug;
>      I'm pretty new to the XML stuff, so probably can't help much there (I've played with XML::Writer to generate XML once or twice, haven't tried Grove).
> In a totally unrelated subject, last night in the midst of your lesson on Camel anatomy, we had briefly discussed slurping a file (I think while we were on the subject of $/). A while back, I started using sysread, as in:
>           sysread HANDLE,my $slurp,-s HANDLE;
> I suppose there could be some platform issues 'cause of the "-s", but it works fine on Solaris :-). The sysread runs faster and sucks down less CPU. Here's a benchmark for 1000 slurps of a 12 MB file:
> Benchmark: timing 1000 iterations of do loop, sysread...
>         do: 201 wallclock secs (108.34 usr + 90.51 sys = 198.85 CPU)
>    sysread: 63 wallclock secs ( 0.04 usr + 63.39 sys = 63.43 CPU)
> Here's the "do" I compared it against:
>           my $slurp = do { local $/; <HANDLE>; };
>                                                 - Rob
> -----Original Message-----
> From: doug.miles at [mailto:doug.miles at]
> Sent: Friday, March 09, 2001 3:05 PM
> To:
> Subject: Anyone doing Perl/XML?
> Anyone out there doing Perl/XML stuff?  I need to be able to convert
> different data sources to XML.  I think that XML::Grove looks like my
> best bet, but the documentation and examples are geared more towards
> parsing rather than XML generation.  Any comments or suggestions?
> -- 
> - Doug
> Encrypted with ROT-26 - all attempts to decrypt are illegal under the

More information about the Phoenix-pm mailing list