[Dub-pm] big data structures relative to memory size

Sean O'Riordain seanpor at acm.org
Fri Apr 16 09:08:23 CDT 2004


Hi folks,

I've an analysis program with a couple of million records that I really 
need to keep in memory, as I need to scan back and forth through them. 
With 5 million odd records (stored as a couple of independent 'arrays', 
or should I say 'lists'), the program requires quite a bit more than my 
1.5 GB of RAM and becomes very slow due to swapping (Gentoo Linux). 
Each record has 5 integers and a string of at most 30 characters, but 
Perl takes up extra RAM for each SV. I would like to be able to handle 
larger datasets much faster than I currently can.
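For concreteness, the sort of thing I've been toying with is packing 
each record into a fixed-width byte string and keeping them all in one 
big scalar, so each record costs 50 bytes instead of six SVs. A rough, 
untested sketch (the helper names are just made up):

use strict;
use warnings;

# 5 signed 32-bit ints + a 30-byte space-padded string = 50 bytes
my $RECLEN = length pack('l5 A30', 0, 0, 0, 0, 0, '');
my $buffer = '';    # every record lives inside this one scalar

sub append_record {
    my ($a, $b, $c, $d, $e, $str) = @_;
    $buffer .= pack('l5 A30', $a, $b, $c, $d, $e, $str);
}

sub get_record {
    my ($i) = @_;    # unpack strips the trailing padding off the string
    return unpack('l5 A30', substr($buffer, $i * $RECLEN, $RECLEN));
}

That keeps random access cheap, but every lookup pays for an unpack, so 
I'm not sure it buys enough.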

Has anybody used Inline::C for handling large data structures? If so, 
how do you load the data into it?
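Something like the following is what I imagine (untested sketch; the 
function names and the fixed MAX_RECS limit are just placeholders, and 
there's no bounds checking):

use strict;
use warnings;
use Inline C => <<'END_C';
    /* flat C-side store for the 5 integer columns */
    #define MAX_RECS 5000000
    static int  ints[MAX_RECS][5];
    static long n_recs = 0;

    void add_record(int a, int b, int c, int d, int e) {
        ints[n_recs][0] = a; ints[n_recs][1] = b;
        ints[n_recs][2] = c; ints[n_recs][3] = d;
        ints[n_recs][4] = e;
        n_recs++;
    }

    int get_field(long rec, int field) {
        return ints[rec][field];
    }
END_C

add_record(1, 2, 3, 4, 5);
print get_field(0, 2), "\n";    # prints 3

That would put the 5 million integer rows in about 100 MB of flat C 
memory with no per-value SV overhead, but I'd still have to work out 
the strings.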

Has anybody used PDL?
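From a skim of the docs, I imagine the integer part going something 
like this (sketch only; the string column would still need separate 
handling, since PDL is aimed at numeric arrays):

use PDL;

# 5 integer columns as one 5 x N piddle: 4 bytes per value, no SVs
my $n    = 5_000_000;
my $ints = zeroes(long, 5, $n);

# set record 0 to (10, 20, 30, 40, 50)
$ints->slice(':,0') .= pdl(long, [10, 20, 30, 40, 50]);

print $ints->at(2, 0), "\n";    # prints 30

Again roughly 100 MB for the 25 million integers, if I've read the 
docs right.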

Any thoughts on which way I should jump?

cheers,
Sean


