sam at vilain.net
Thu Sep 21 16:54:22 PDT 2006
Peter C. Kelly wrote:
> Firstly, I miss Catalyst and a functioning Perl-Mongers group terribly
> now I've moved to Perth, and frankly wish I could have my sunshine and
> companions simultaneously.
> Today I found myself wanting to squeeze down a data structure (a set of
> offer curves) into just the unique ones. The offer curves themselves
> are a fairly complex data structure.
> The way I did this was not to use Data::Compare (for no good reason)
> but instead to create a hash whose keys were the curves (serialised
> with Data::Dumper) and whose values were arrays of the original hash
> keys that led to them.
> Do you think there would be any utility in a Data::Distinct module that
> looked for duplicates at a certain level (or levels) into a data
> structure and provided a distinct version?
> Or would it be better to just compare them all to each other using
> Data::Compare? Is it reasonable to think that a hash would be more
> efficient, or would the /eval/ing to get them out again outweigh the
> quadratic cost of pairwise comparisons?
Having a hash from something like md5_hex(Dump $structure) to the
actual structure might be a little more reliable; I don't think
Data::Dumper guarantees stable output unless you canonicalise it
(e.g. by setting $Data::Dumper::Sortkeys).
(md5_hex from Digest::MD5, Dump from YAML::Syck.)
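A sketch of that digest-keyed variant, under the assumption that YAML::Syck honours its documented SortKeys option for canonical key order; keeping a reference to the first structure seen per digest also sidesteps the eval round-trip Peter asks about:

```perl
use strict;
use warnings;
use Digest::MD5 qw(md5_hex);
use YAML::Syck  ();

# Sort hash keys so logically identical structures dump to
# byte-identical YAML (assumes YAML::Syck's SortKeys option).
local $YAML::Syck::SortKeys = 1;

# Hypothetical curves; 'a' and 'b' differ only in key order.
my %curves = (
    a => { price => 10, qty => 1 },
    b => { qty => 1, price => 10 },
    c => { price => 30, qty => 3 },
);

my %distinct;    # digest => { curve => ref, keys => [...] }
for my $key ( sort keys %curves ) {
    my $digest = md5_hex( YAML::Syck::Dump( $curves{$key} ) );
    $distinct{$digest}{curve} ||= $curves{$key};
    push @{ $distinct{$digest}{keys} }, $key;
}

print scalar( keys %distinct ), " distinct curves\n";
```

Building this hash is a single pass over the data, versus the quadratic cost of comparing every pair with Data::Compare.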
More information about the Wellington-pm mailing list