SPUG: general hash performance questions

Benjamin Franks benjamin at dzhan.com
Wed Feb 27 17:25:14 CST 2002

I was just messing about with some test code to gauge execution times and
came up with 2 general questions:

(1)  While iterating over the keys of a large hash, I notice a speed
increase if I do the following:

	foreach (keys %hash) {
		$hash{$_}++;
	}

instead of doing:

	foreach (keys %hash) {
		if (exists $hash{$_}) {$hash{$_}++;}
		else {$hash{$_}=1;}
	}

This intuitively makes sense, because I'm cutting out the conditional
statement and the existence test.  But why does it work?  If I increment a
non-existent hash key's value, how does it (correctly) end up at 1?

(2)  Let's say I build a hash of 1 million keys, each having a value of 1:

	foreach (1..1000000) {
		$hash{$_} = 1;
	}

If I watch the memory consumption with "top," I can see the memory
footprint grow quite large.
However, if I make a complex data structure of hashes like:

	foreach $a (1..100) {
		foreach $b (1..100) {
			foreach $c (1..100) {
				$hash{$a}{$b}{$c} = 1;
			}
		}
	}

I don't see the memory footprint grow nearly as large as in the simpler
case.  Why is this?  I understand "top" is probably not the greatest tool
for this; can anyone recommend better Perl memory-profiling tools?

(3)  And even though this is a perl list, can complex structures like hash
tables of hash tables be done in C?  Any sample code sources?


