[Melbourne-pm] Mod_perl2

Daniel Pittman daniel at rimspace.net
Thu Sep 14 00:31:51 PDT 2006


"Andrew Speer" <andrew.speer at isolutions.com.au> writes:
> On Thu, September 14, 2006 1:13 pm, Daniel Pittman wrote:
>>
>> Only 10MB?  Lucky you.  One of my clients has around 70MB of footprint
>> from their mod_perl related code.[1]

[...]

> Yeah, bad typo there - I meant 100MB. Glad to hear others have similar
> issues though.

Uh-huh.  I think that my clients lose some performance in the process
as well, but whatever.  That is hardly their worst issue.

> I understand that copy on write helps, but as I think you or Scott
> alluded to - if each one of those processes manipulates a large amount
> of data, or dynamically 'requires' other modules on the fly, then "real"
> memory can get used fairly quickly.

Yes.

> You are right about this happening with any app/infrastructure, but with
> Apache mod_perl a process may allocate 100MB to generate some dynamic
> content in one request, then be asked to serve up a tiny static CSS file
> on the next one - eventually all processes pad out to 100MB, and the
> server struggles to even service requests for static content (unless
> MaxRequestsPerChild is reached or other memory management techniques are
> used).

This is one reason why I like to decouple the two -- that way my FastCGI
process can reset itself if memory use jumps more than 20MB in the last
request, or whatever, and not have to take an Apache child with it.

As Scott points out, though, MaxRequestsPerChild exists for a very, very
good reason -- and everyone should be using it to limit the lifetime of
their Apache children, no matter what they do with them. :)

[...]

> Tuning an Apache/mod_perl app to run in a corporate environment with a
> known or predictable load is fairly straightforward. Tuning the same
> app to service the most possible connections on a single box with
> unpredictable load (e.g. public facing) would seem to almost demand
> two HTTP processes - one for static content, and one for dynamic. I am
> open to suggestions as to how you could "tune" an Apache process to
> handle such a situation.

Mostly, setting MaxRequestsPerChild and the maximum number of children
based on your worst-case resource use.  That way you don't
significantly delay requests in the normal case, and you can still
cope with the worst.
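
For example, with hypothetical numbers -- say each child worst-cases at
around 100MB and the box can spare 2GB for Apache -- a prefork
configuration along these lines keeps you out of swap:

    <IfModule prefork.c>
        StartServers          5
        MaxSpareServers      10
        # 20 children x 100MB worst case = ~2GB, so we never swap.
        MaxClients           20
        # Recycle children regularly to claw back leaked memory.
        MaxRequestsPerChild 500
    </IfModule>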

Using the Apache2::SizeLimit module also helps: it allows your Perl
script to request that the Apache child exit early if it has consumed
too much memory.  The same facility is available for Apache 1, but not
bundled.
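
Something like this in a startup.pl, per the module's documented
synopsis (sizes are in KB; the exact interface has varied between
Apache2::SizeLimit versions, so treat it as a sketch):

    use Apache2::SizeLimit;

    # Ask for this child to exit once it passes roughly 100MB.
    Apache2::SizeLimit->set_max_process_size(100_000);

    # Then in httpd.conf, run the check after each request:
    #   PerlCleanupHandler Apache2::SizeLimit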


This is, essentially, the same facility you use with FastCGI, where you
simply exit and get restarted, except applied within Apache.

>> Apache also supports FastCGI -- with the original FastCGI module and the
>> newer (and more free) FCGI module.  Both of these give the same benefits
>> as lighttpd and FastCGI, plus the advantages that mod_perl and other
>> Apache modules provide.
>
> Thanks for that - FCGI was the module I meant to refer to. I will have
> to try out Apache/FastCGI - I have not had a chance yet.

I think in general FCGI is a better solution, but I had some problems
with it causing failures of the RT mail submission tool.  I never got
around to tracking them down, but should one day.

> Anyway, none of the above was meant to bash mod_perl - I think it is a
> great piece of software and has let me do things quickly in Apache
> that otherwise I would never have been able to do. 

Oh, likewise.  I also think that lighttpd is probably a great solution
to a bunch of problems. :)

Regards,
        Daniel
-- 
Digital Infrastructure Solutions -- making IT simple, stable and secure
Phone: 0401 155 707        email: contact at digital-infrastructure.com.au
                 http://digital-infrastructure.com.au/

