andrew.speer at isolutions.com.au
Wed Sep 13 22:16:40 PDT 2006
On Thu, September 14, 2006 1:13 pm, Daniel Pittman wrote:
> Only 10MB? Lucky you. One of my clients has around 70MB of footprint
> from their mod_perl related code.
> Also, remember that most of that memory is shared between Apache
> instances in a copy-on-write fashion. So, for all that you have the
> same 70MB mapped, each fork may only end up using 2MB of extra memory.
> The overall memory figure can be *very* misleading, and it is very hard
> to get an impression of the level of private memory a Linux binary uses,
> because of the way copy-on-write data is accounted.
Yeah, bad typo there - I meant 100MB. Glad to hear others have similar
experiences.
I understand that copy-on-write helps, but as I think you or Scott alluded
to - if each of those processes manipulates a large amount of data, or
dynamically 'requires' other modules on the fly, then "real" memory can get
used fairly quickly.
You are right about this happening with any app/infrastructure, but with
Apache/mod_perl a process may allocate 100MB to generate some dynamic
content in one request, then be asked to serve up a tiny static CSS file
on the next one - eventually all processes pad out to 100MB, and the
server struggles even to service requests for static content (unless
MaxRequestsPerChild is reached or other memory management techniques are
used).
It seems to be "better" (or at least more manageable) to say e.g. "OK,
10-20 processes will be dedicated to dynamic content, and can grow to
100MB each, and 40-80 processes will handle static content, and will
remain about the same size". Correct me if I am wrong, but you cannot seem
to do this with Apache/mod_perl? It looks like you can with Apache/FastCGI.
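As a rough sketch of what that split could look like with mod_fcgid (the
directive names below are from its documentation - treat them as an
illustration, and check them against the version you install):

```apache
# httpd.conf fragment: cap the FastCGI (dynamic) pool independently
# of Apache's own worker pool, which stays small and cheap because
# it only serves static files and proxies requests to the pool.
<IfModule mod_fcgid.c>
    FcgidMaxProcesses          20     # hard cap on dynamic workers
    FcgidMaxRequestsPerProcess 1000   # recycle workers to bound leaks
    AddHandler fcgid-script .fcgi
</IfModule>

# Apache's prefork children remain lightweight static servers.
MaxClients 80
```

The point is that the 100MB processes and the static-file processes are
counted and limited separately, which is exactly the knob mod_perl does
not appear to give you.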
>> There are ways and means around this problem, but they all seem a bit
>> kludgy - front-end proxy servers, multiple Apache servers (Apache with
>> mod_perl for dynamic content, straight Apache for static content).
> ...er, or you could tune your Apache process correctly. That way you
> fixed the problem rather than trying to work around it.
Tuning an Apache/mod_perl app to run in a corporate environment with a
known or predictable load is fairly straightforward. Tuning the same app
to service the most possible connections on a single box with
unpredictable load (e.g. public facing) would seem to almost demand two
HTTP processes - one for static content, and one for dynamic. I am open to
suggestions as to how you could "tune" a single Apache process to handle
such a load.
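The closest thing I know of to tuning within a single Apache is capping
per-child growth with Apache::SizeLimit from CPAN - a sketch based on its
documented interface (the exact handler registration varies between
mod_perl 1.x and 2.x, so check the module's docs):

```perl
# In startup.pl (mod_perl 1.x; mod_perl 2 uses Apache2::SizeLimit).
# A child whose total size exceeds the limit is killed after the
# current request, so one fat request cannot permanently bloat it.
use Apache::SizeLimit;

$Apache::SizeLimit::MAX_PROCESS_SIZE = 100_000;   # in KB, i.e. ~100MB

# Then register Apache::SizeLimit as a cleanup handler in httpd.conf,
# per the module's documentation.
```

This bounds the damage, but it recycles processes rather than separating
the static and dynamic pools.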
> Apache also supports FastCGI -- with the original FastCGI module and the
> newer (and more free) FCGI module. Both of these give the same benefits
> as lighttpd and FastCGI, plus the advantages that mod_perl and other
> Apache modules provide.
Thanks for that - FCGI was the module I meant to refer to. I will have to
try out Apache/FastCGI - I have not had a chance yet.
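For the record, the heart of an FCGI-based app is just an accept loop - a
minimal sketch using the CPAN FCGI module's documented interface (the
response body is obviously illustrative):

```perl
#!/usr/bin/perl
use strict;
use warnings;
use FCGI;

# One long-lived process: Perl and any modules loaded here are
# compiled once, then each Accept() handles a single request - the
# same persistence win as mod_perl, but in a separately managed
# pool of processes.
my $request = FCGI::Request();

while ($request->Accept() >= 0) {
    print "Content-type: text/plain\r\n\r\n";
    print "Hello from a persistent FastCGI process\n";
}
```

The web server (Apache with mod_fastcgi/mod_fcgid, or lighttpd) decides
how many of these processes to run, which is what gives you the separate
static/dynamic sizing discussed above.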
Anyway, none of the above was meant to bash mod_perl - I think it is a
great piece of software, and it has let me do things quickly in Apache
that I would otherwise never have been able to do. I was just discussing
some of the limitations it seems to have, and the alternatives that are
around.
More information about the Melbourne-pm mailing list