Cleaning Up on Exit from Perl

Stas Bekman stas at
Tue Jun 10 18:39:40 CDT 2003

Scott Penrose wrote:
> Hey Dudes,
> We have a daemon which takes up over 120MB before exit. It has HEAPS of 
> objects all open. It performs adequately, and at least 80MB of the system 
> is shared, so when we fork we get the best economy of memory.
> Like Apache we tend to prefork and only allow our processes to run for a 
> while to capture those occasional memory leaks (somewhere between 25 and 
> 200 requests before exiting).
> On Exit the daemon flushes log entries and dies. This sends a HUP to the 
> parent which restarts a new child.
> My problem is that on exit Perl unwinds each of the objects it has 
> created, calling DESTROY where possible on each. This takes a MASSIVE 
> amount of time, during which even the normally read-only modules (with 
> huge data) tend to come out of share as they are written to. So time 
> is huge and memory is huge!
> Two questions - can and should I do an ABSOLUTE EXIT? What I mean to say 
> is - QUIT the process (equivalent to a kill -9).
> Even if my code was perfect Perl still deallocates all its memory - 
> which all takes a while with such a large amount of code.
> So... I know that I can send myself a KILL, and I know I can do an exit 
> in C - but ...
>     * Should I?
>     * Is there a Perl way to do it (properly - not kill)
> There are other good reasons to do this - e.g. a system is in an unstable 
> state (databases are fragile or going down) and you don't want your 
> objects to run their cleanup.

The mod_perl guide comes to help:

PERL_DESTRUCT_LEVEL=-1 perl my_heavy_destruct_program
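For the "Perl way" asked about above, there is also POSIX::_exit(), which terminates the process immediately: no END blocks, no DESTROY calls, no global destruction walk. Note it also skips stdio flushing, so flush your logs first. A minimal sketch (the Noisy class is just an illustration):

```perl
use strict;
use warnings;
use POSIX ();

# A toy class with a noisy destructor, to show it never runs.
package Noisy;
sub new     { return bless {}, shift }
sub DESTROY { print "DESTROY ran\n" }

package main;

my $pid = fork();
die "fork failed: $!" unless defined $pid;

if ($pid == 0) {
    # Child: build an object, then bail out without unwinding.
    my $obj = Noisy->new;
    # Terminates at once - DESTROY above is never called.
    POSIX::_exit(0);
}

waitpid($pid, 0);
my $status = $? >> 8;
print "child exit status: $status\n";
```

The fork here only exists to make the example self-contained; in the daemon you would call POSIX::_exit() directly at the end of the child's request loop.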

Stas Bekman            JAm_pH ------> Just Another mod_perl Hacker     mod_perl Guide --->

More information about the Melbourne-pm mailing list