SPUG: Forked Children talking back to the parent?
wildwood_players at yahoo.com
Thu Jan 18 10:21:12 CST 2001
First off, I want to apologize if I am confusing John
Cokos's issue by introducing my own problem, but it
seemed very closely related.
With that said,
Richard Anderson wrote:
> It might be simpler to use shared memory and a
> semaphore as opposed to managing 10 sockets.
Hmmm, I am trying to figure out what you have in mind.
If I am on the right track, you are talking about
having the parent be event-driven, based on signals
from the children. My immediate concern there is that
I have some synchronization issues which might cause
problems in an event-driven model: the parent must
wait for all children to complete a round of events
before going on to the next event. I suppose I could
keep track of that in some structure and interrogate
the structure while I wait for signals.
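One way to picture that "wait for every child to finish a
round" structure, sketched with one pipe per child instead of
shared memory (the variable names and the three-child setup
are my own, not from this thread):

```perl
#!/usr/bin/env perl
use strict;
use warnings;

my $kids = 3;
my (@readers, @pids);

for my $i (1 .. $kids) {
    pipe(my $r, my $w) or die "pipe: $!";
    my $pid = fork();
    die "fork: $!" unless defined $pid;
    if ($pid == 0) {              # child: do one "round", then report
        close $r;
        # ... the real work (terminal-server commands) goes here ...
        print {$w} "done $$\n";
        close $w;
        exit 0;
    }
    close $w;                     # parent keeps only the read end
    push @readers, $r;
    push @pids,    $pid;
}

# Parent: a round is complete only when every child has reported,
# so track the outstanding children in a structure and cross them
# off as their reports come in.
my %pending = map { $_ => 1 } @pids;
for my $r (@readers) {
    my $line = <$r>;              # blocks until this child reports
    delete $pending{$1} if defined $line && $line =~ /^done (\d+)/;
}
print %pending ? "round incomplete\n" : "round complete\n";
waitpid($_, 0) for @pids;
```

The same bookkeeping works with shared memory and a semaphore;
only the transport changes.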
Jason Lamport wrote:
> First, you should consider carefully whether
> splitting the task into multiple processes will
> actually speed things up.
In my case it definitely speeds up processing. Each
child is telnetted into a separate terminal server,
and the servers are spread all over the country. In
many cases, the scripts that the parent is processing
will have commands to issue on every terminal server.
By forking the children and then sending them the
commands to process in parallel, I get a significant
improvement, since most of the processing occurs on
the terminal servers.
John Labovitz wrote:
> To handle the case of a child "hanging," you might
> consider using IO::Select (a front-end to the
> select(2) system call) to manage the connections.
Seems like a good idea. I originally looked at this
but couldn't quite figure out the whole
watching-multiple-things-at-once deal. I will take
another look at it.
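For what it's worth, the "watching multiple things at once"
part is mostly one call: hand IO::Select your handles and ask
which are ready. A minimal sketch with three child pipes (the
names and sleep times are invented for illustration):

```perl
#!/usr/bin/env perl
use strict;
use warnings;
use IO::Select;

my $sel = IO::Select->new;

for my $name (qw(east west central)) {
    pipe(my $r, my $w) or die "pipe: $!";
    my $pid = fork();
    die "fork: $!" unless defined $pid;
    if ($pid == 0) {                       # child
        close $r;
        sleep int(rand 2);                 # children finish at different times
        print {$w} "$name: link up\n";
        close $w;
        exit 0;
    }
    close $w;
    $sel->add($r);                         # parent watches this handle
}

my @reports;
while ($sel->count) {
    # can_read returns only the handles with data (or EOF) pending;
    # the timeout is what lets you notice a child that has hung.
    for my $r ($sel->can_read(10)) {
        my $line = <$r>;
        if (defined $line) { push @reports, $line }
        else               { $sel->remove($r); close $r }  # EOF: child done
    }
}
print for sort @reports;
wait() for 1 .. 3;
```

The key point is that no single slow child blocks the reads
from the others.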
> One thought: instead of sitting in a loop waiting
> for messages from the children, what if the children
> send a signal ( via kill() ) to let the parent know
> that the child has data for it?
This sounds similar to what Richard Anderson wrote,
but it still presumes I am using the bidirectional
sockets instead of shared memory for communication. I
am giving this some thought.
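The kill()-to-notify idea can be sketched in a few lines
either way; here it is over a pipe. The handler only sets a
flag, and the actual read happens in the main flow (details
invented for illustration):

```perl
#!/usr/bin/env perl
use strict;
use warnings;

my $have_data = 0;
$SIG{USR1} = sub { $have_data = 1 };    # keep the handler trivial

pipe(my $r, my $w) or die "pipe: $!";
my $pid = fork();
die "fork: $!" unless defined $pid;

if ($pid == 0) {                        # child
    close $r;
    print {$w} "result from child $$\n";
    close $w;
    kill 'USR1', getppid();             # tell the parent to look
    exit 0;
}

close $w;
sleep 1 until $have_data;               # the signal interrupts the sleep
my $line = <$r>;
print "parent got: $line";
waitpid($pid, 0);
```

With shared memory, the kill() stays the same and only the
"read the data" step changes.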
Dean Hudson wrote:
> Why not pick some number of jobs (n) and when @todo
> grows to n element, fork off a child process which
> then starts doing work, then have the parent pop
> those items off @todo. The long running process is
> then basically just a daemon that queues jobs and
> forks workers. Workers die when they finish their
> todo list.
This solution might work for John Cokos, but in my
circumstance, I need to maintain my connection from
the children to the terminal servers. When there is
no script running, the children are to report back to
the parent periodically showing that the connection is
up and that no other processes are running that would
preclude a user from running a script.
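That idle-time heartbeat can also ride on a select() timeout:
the child writes a status line every so often, and the parent
treats a long silence as a down connection. The intervals and
messages below are invented for the sketch:

```perl
#!/usr/bin/env perl
use strict;
use warnings;
use IO::Handle;
use IO::Select;

pipe(my $r, my $w) or die "pipe: $!";
my $pid = fork();
die "fork: $!" unless defined $pid;

if ($pid == 0) {                    # child: heartbeat while idle
    close $r;
    $w->autoflush(1);               # each beat must go out immediately
    for (1 .. 3) {
        print {$w} "up: connection ok, nothing blocking\n";
        select undef, undef, undef, 0.2;   # sub-second sleep
    }
    close $w;
    exit 0;
}

close $w;
my $sel   = IO::Select->new($r);
my $beats = 0;
while (1) {
    if ($sel->can_read(2)) {        # 2s of silence => assume trouble
        my $line = <$r>;
        last unless defined $line;  # EOF: child exited
        $beats++ if $line =~ /^up:/;
    } else {
        warn "missed heartbeat\n";
        last;
    }
}
print "saw $beats heartbeats\n";
waitpid($pid, 0);
```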
I want to thank everyone who has responded. I
definitely got some different ideas on how to approach
this problem. I am about to redesign the process, so
the information is quite timely. I am definitely
interested in the interrupt-driven approach.
My question on the interrupt-driven approach is: what
happens if I am processing a signal from one child in
the parent and I receive a signal from another child?
Does it interrupt the code already processing the
first signal? I haven't done much signal processing.
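One common way to sidestep that worry, whatever the answer, is
to do almost nothing inside the handler: just record the event
and let the main loop do the real processing, so it never
matters whether another signal lands mid-handler. A sketch
(note that under modern Perl's deferred "safe signals", two
signals of the same type in quick succession can also coalesce
into one delivery):

```perl
#!/usr/bin/env perl
use strict;
use warnings;

my @queue;
$SIG{USR1} = sub { push @queue, scalar time };  # record, don't process

# Pretend two children signalled us.
kill 'USR1', $$;
kill 'USR1', $$;

# Drain outside the handler, one event at a time.
my $drained = 0;
while (defined(my $ev = shift @queue)) {
    $drained++;
    # ... process one child's report here ...
}
print "processed $drained signal event(s)\n";
```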
Richard O. Wood
Wildwood IT Consultants, Inc.
wildwood_players at yahoo.com
Seattle Perl Users Group (SPUG) Home Page: http://www.halcyon.com/spug/