SPUG: Shared memory system

Colin Meyer cmeyer at helvella.org
Thu Dec 6 16:18:56 CST 2001


Hi, Peter,

On Thu, Dec 06, 2001 at 01:49:04PM -0800, Peter Darley wrote:
> Colin,
> 	Thanks for the link to mod_perl guide, it's already proven quite useful.
> I'd forgotten about the Apache::DBI (tho it doesn't seem to make much of a
> difference on this system)

I've experienced db connect times as high as several seconds, but if
your db is quite local, then the savings in connection time might not be
so high. Another benefit of Apache::DBI is being able to cache
frequently used statement handles, if your backend driver (DBD module)
supports it.
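
For what it's worth, enabling it is usually just a matter of loading
Apache::DBI before DBI in your startup.pl. A minimal sketch (the DSN,
username and password below are placeholders):

    # startup.pl -- pulled in by the parent httpd via PerlRequire
    use Apache::DBI ();   # must be loaded before DBI so it can wrap connect()
    use DBI ();

    # Optional: open the connection when each child starts, so the first
    # request doesn't pay the connect cost.
    Apache::DBI->connect_on_init(
        "dbi:Pg:dbname=reports", "someuser", "somepass",
        { RaiseError => 1, AutoCommit => 1 },
    );

    1;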

>       I'm wanting to cache report data from the database so it's available for
> formatting when people request it.  I'd like to have something in-memory, as
> I have tons of available memory on this machine, and would like to get some
> use out of it;  I would also like to do this somewhat as a learning
> exercise.

The section of the guide that you'll want to study is:
http://perl.apache.org/guide/performance.html#Know_Your_Operating_System

The shared memory modules (IPC::*) are difficult to use (they don't take
well to holding Perl data structures) and, in my experience, somewhat
buggy (at least on Linux, I got segfaults under stress testing).
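
If you do want to see what they look like anyway, the usual approach is
to tie a variable into a shared segment with something like
IPC::Shareable. This is only a rough sketch from memory - treat the
exact options as approximate:

    use IPC::Shareable;

    # Tie a hash to a SysV shared memory segment identified by the
    # glue string 'rpts'.  Values get serialized on every store/fetch,
    # which is where nested Perl structures start to hurt.
    tie my %reports, 'IPC::Shareable', 'rpts', {
        create => 1,
        mode   => 0666,
    };

    $reports{daily} = "some report data";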

The typical way to share memory in a mod_perl app is to take advantage
of your OS's "copy-on-write" memory handling. A forked process receives
a copy of its parent's memory, but the OS only copies the pages that
have actually been written to since the fork. By preloading data into
the parent apache/mod_perl process, and never writing to it afterwards,
it remains shared between all of the children. To take advantage of this
scheme you'd have to do the database read in the parent process before
it forks the children, and restart apache on a regular basis (daily,
say) so the preloaded data doesn't go stale.
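
In practice that means doing the read in a file the parent pulls in at
startup (startup.pl or a PerlRequire'd module). A rough sketch, with
made-up package, DSN and table names:

    # My/ReportCache.pm -- loaded by the parent before any children exist
    package My::ReportCache;
    use strict;
    use DBI ();

    # Package-level, read-only data.  Loaded once in the parent; the
    # children inherit the pages and share them until someone writes.
    our %REPORTS;

    my $dbh  = DBI->connect( "dbi:Pg:dbname=reports", "someuser", "somepass",
                             { RaiseError => 1 } );
    my $rows = $dbh->selectall_arrayref(
        "SELECT report_id, report_data FROM reports" );
    %REPORTS = map { $_->[0] => $_->[1] } @$rows;
    $dbh->disconnect;

    1;

Your handlers would then read %My::ReportCache::REPORTS but never assign
to it; the moment a child writes to those pages they get copied and stop
being shared.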

>       So, I want to be able to share variables between processes
> without writing them to disk. 

Another way to go about it would be to take advantage of your OS's disk
file caching. You could have a cron job that, at some regular interval,
hits the db, selects out the report data, and saves it to a file in
Storable format, or whatever. Your webapp would read the data from the
file. Your OS will take advantage of any excess RAM and keep the
often-accessed files in its disk buffers - caching that is transparent
to your application.
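
Something like this, say (paths, DSN and query are made up):

    # dump_reports.pl -- run from cron every N minutes
    use strict;
    use DBI ();
    use Storable qw(nstore);

    my $dbh  = DBI->connect( "dbi:Pg:dbname=reports", "someuser", "somepass",
                             { RaiseError => 1 } );
    my $data = $dbh->selectall_arrayref(
        "SELECT report_id, report_data FROM reports" );
    $dbh->disconnect;

    # Write to a temp file and rename, so readers never see a
    # half-written file.
    nstore( $data, '/var/cache/myapp/reports.sto.tmp' );
    rename '/var/cache/myapp/reports.sto.tmp',
           '/var/cache/myapp/reports.sto'
        or die "rename failed: $!";

and on the web side just:

    use Storable qw(retrieve);
    my $reports = retrieve('/var/cache/myapp/reports.sto');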

If you have the time, it isn't too difficult to set up simple test
scenarios and benchmark them. When webapp caching is layered on top of
an OS that is trying to be clever, the results can sometimes be
unpredictable.
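
The Benchmark module makes quick comparisons painless. For instance, to
compare pulling the Storable file against a fresh query (again, the
names and paths are just for illustration):

    use strict;
    use Benchmark qw(cmpthese);
    use Storable qw(retrieve);
    use DBI ();

    my $dbh = DBI->connect( "dbi:Pg:dbname=reports", "someuser", "somepass",
                            { RaiseError => 1 } );

    # Run each sub for at least 3 CPU seconds, then print a comparison.
    cmpthese( -3, {
        storable_file => sub { retrieve('/var/cache/myapp/reports.sto') },
        fresh_query   => sub {
            $dbh->selectall_arrayref(
                "SELECT report_id, report_data FROM reports" );
        },
    } );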

Have fun,
-C.


> Thanks, Peter Darley
