[Chicago-talk] Script design question

Jess Balint jbalint at gmail.com
Tue Dec 13 18:50:04 PST 2005


You could create 4 queues and just watch the queues. You could put all your
output in a log and just lock the file. You could use signals as IPC to
notify the parent process that something is done, then have it check the
queues and start the next job. Shouldn't be too hard. I have done something
like this before for a very high-volume queuing system and it worked well.

You could use threads too, but I don't think they are needed to just sit
there and wait for something to finish.
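
If you skip the signal handler and just block in waitpid(), the parent loop
can stay pretty small. A rough, untested sketch (the group list and the
/bin/sleep placeholder are made up; you'd swap in the real imsbackup call):

#!/usr/bin/perl
use strict;
use warnings;

my @todo = ('A' .. 'Z');   # groups still to run (made up here)
my $max  = 4;              # how many children to keep busy
my %running;               # pid => group name

while (@todo or %running) {
    # top off the pool
    while (@todo and keys(%running) < $max) {
        my $group = shift @todo;
        my $pid   = fork();
        die "fork failed: $!" unless defined $pid;
        if ($pid == 0) {
            # child: placeholder command, replace with the real imsbackup call
            exec('/bin/sleep', '1') or die "exec failed: $!";
        }
        $running{$pid} = $group;
    }

    # waitpid(-1, 0) blocks until *any* child exits (perldoc -f waitpid),
    # so the parent just sits here until a slot frees up
    my $pid = waitpid(-1, 0);
    next unless exists $running{$pid};
    my $group  = delete $running{$pid};
    my $status = $? >> 8;
    print "group $group finished, exit status $status\n";
}

The %running hash doubles as the queue of in-flight jobs; a failure just
gets logged and the loop moves on to the next group.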

Jess

-----Original Message-----
From: chicago-talk-bounces+jbalint=gmail.com at pm.org
[mailto:chicago-talk-bounces+jbalint=gmail.com at pm.org] On Behalf Of Young,
Darren
Sent: Tuesday, December 13, 2005 7:25 PM
To: Chicago.pm chatter
Subject: [Chicago-talk] Script design question


Looking for some input on a script before I sit down and try to create
it. The basic need is to launch 4 system commands at the same time, watch
each one for completion, and start another as soon as one finishes.
Basically, it needs to keep 4 child processes running until a defined end
point is reached.

If that doesn't make sense, here's what it's for. In order to run
backups on our mail system we have to use a Sun-supplied binary
(imsbackup). As an argument this binary takes what are called "groups"
to perform a given backup. Those "groups" are defined in another file
(backup-groups.conf), which contains lines such as:
   groupA=a*
   groupB=b*

And so on and so forth until the letter Z. groupA covers users whose
names start with the letter a, groupB the b's, and so on. Now, since these
backups take so long to run, we're going to run 4 of them in parallel,
each one against a different group. So, from the command line I would run:
   imsbackup -i -f- /gsbims/groupA > /export/backups/groupA.bkp &
   imsbackup -i -f- /gsbims/groupB > /export/backups/groupB.bkp &
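
In the script itself I'm assuming each child would set up the redirection
and then exec the binary directly rather than going through a shell. A
rough, untested sketch of the child side (group name hard-coded here just
to show the shape):

my $pid = fork();
die "fork failed: $!" unless defined $pid;
if ($pid == 0) {
    my $group = 'groupA';    # would really come from the to-do list
    # send the child's STDOUT to the backup file, then become imsbackup
    open(STDOUT, '>', "/export/backups/$group.bkp")
        or die "can't open output file for $group: $!";
    exec('imsbackup', '-i', '-f-', "/gsbims/$group")
        or die "exec imsbackup failed: $!";
}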

Do that for groups A-D and let them run. Then, the first one that
completes falls off the "to-do" list and the next letter group gets
started (E in this case). Then on to F, G, etc. Testing so far says
that an individual group can take up to 2-3 hours to complete, so I have
to be able to deal with processes that sit and run for a while and
produce no output. Not sure what exit codes the imsbackup program gives
back, but I'm betting it's just a 0 or 1.

Now, if one of them fails, all I need to do is log the fact that it did
and move on. Additionally, I'd like to keep track of the time it took to
perform each group "thread" and log that as well. I'd prefer to have the
script call the commands directly rather than call other scripts; I've
found that I lose tidbits here and there from sub-scripts. While that's a
preference, it's certainly not required.
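
For the timing and the pass/fail logging, this is roughly what I had in
mind for a single child (untested, with /bin/true standing in for
imsbackup):

#!/usr/bin/perl
use strict;
use warnings;

my $group = 'groupA';    # just to show the shape
my $start = time();

my $pid = fork();
die "fork failed: $!" unless defined $pid;
if ($pid == 0) {
    exec('/bin/true') or die "exec failed: $!";   # stand-in for imsbackup
}

waitpid($pid, 0);
my $elapsed = time() - $start;

# $? packs the exit status in the high byte and any fatal signal in the low bits
if ($? == 0) {
    print "$group completed ok in ${elapsed}s\n";
} else {
    printf "%s FAILED (exit %d, signal %d) after %ds\n",
        $group, $? >> 8, $? & 127, $elapsed;
}

In the real thing the start time would be kept per pid so each group's
elapsed time can be logged when it gets reaped.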

The part that I need help with is how to actually spawn each child and
watch for their outcomes (reliably). The waitpid() call seems to wait for
one particular process to complete, or am I reading the docs incorrectly?
Am I talking about using threads in this case, or am I over-engineering
what I want done? I've done plenty of while(<PIPE>) stuff previously, but
only on a single process.

Any thoughts on this would be very appreciated.

-------------------------------------------------------------
| Darren Young              | http://www.chicagogsb.edu     |
| Senior UNIX Administrator | darren.young at chicagogsb.edu   |
| University of Chicago GSB | darren.young at gsb.uchicago.edu |
-------------------------------------------------------------
_______________________________________________
Chicago-talk mailing list
Chicago-talk at pm.org
http://mail.pm.org/mailman/listinfo/chicago-talk


