[Pdx-pm] Q about ithreads && signals

Gavin LaRowe glarow126 at attbi.com
Mon Nov 18 11:13:08 CST 2002


> > Can you put a program onto the source machine? A very simple server
> > could do the trick: A client connects to its port, and the server
> > sends the data down the pipe every time it changes. Pretty
> > straightforward TCP socket stuff should do the trick.
>
> The short answer is no.  I'm stuck with talking to a crappy old HTTP
> server in another organization that's probably ~1995 vintage. The good
> news is the data set is tiny (about 1K).  The problem I'm trying to
> solve is reducing the load on the crappy server.  I'm only serving it
> back up as http again because of legacy clients on our end - this will
> change with time.

You've probably already solved your problem, but why not use sockets? As
others have mentioned, threads and signals usually don't play well together
unless you are a hedonist for one-off POSIX fun. If you go the socket
route, why not use a bi-directional client? With only 1K of data, it should
put a barely noticeable load on the "crappy old HTTP server." If I remember
right, there is an example of a bi-directional client in the Perl Cookbook.
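The shape of that client, roughly, if a sketch helps: to keep this
self-contained and runnable, a forked toy server on localhost stands in
for the real data source (the real host, port, and protocol are whatever
the source machine actually speaks).

```perl
#!/usr/bin/perl
use strict;
use warnings;
use IO::Socket::INET;

# Toy server on an ephemeral localhost port -- a stand-in for the
# real data source on the other end.
my $server = IO::Socket::INET->new(
    LocalAddr => '127.0.0.1',
    Listen    => 1,
    Proto     => 'tcp',
) or die "listen failed: $!";
my $port = $server->sockport;

my $pid = fork();
die "fork failed: $!" unless defined $pid;

if ($pid == 0) {
    # Child: accept one client and push the "data set" down the pipe.
    my $conn = $server->accept or exit 1;
    print {$conn} "dataset v1\n";
    close $conn;
    exit 0;
}

# Parent: the client side -- connect and read each update as it arrives.
close $server;
my $sock = IO::Socket::INET->new(
    PeerAddr => '127.0.0.1',
    PeerPort => $port,
    Proto    => 'tcp',
) or die "connect failed: $!";

while (my $line = <$sock>) {
    print "got update: $line";
}
close $sock;
waitpid($pid, 0);
```

The client half (the connect-and-read loop) is the part you'd point at
the real source; everything above the fork is just scaffolding so the
example runs on its own.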

Also, if it's only ~1K, why not just use a shell script with lynx and
sdiff -s (or something similar ... rsync?), OR use LWP and Perl to
request the "data set" or file, compare it locally, and then distribute
it to your "multiple clients internally"? Why do you need the POSIX
alarms? Again, this is all based on the idea that you aren't a hedonist
for one-off POSIX fun :-) Hope this helps ...
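If you go the LWP route, here's roughly what I mean. The URL is made up,
the actual fetch is commented out so the sketch runs without a network,
and the redistribution step is left as a print:

```perl
#!/usr/bin/perl
use strict;
use warnings;
use LWP::Simple qw(get);
use File::Temp qw(tempfile);

# Hypothetical URL -- substitute the real one.
my $url = 'http://data.example.org/dataset';

# Returns true when the fresh copy differs from what's in the cache file.
sub changed {
    my ($new, $cache_file) = @_;
    my $old = '';
    if (open my $fh, '<', $cache_file) {
        local $/;                       # slurp the whole cached copy
        $old = <$fh>;
        $old = '' unless defined $old;  # empty file reads as undef
        close $fh;
    }
    return $new ne $old;
}

# In the real script you'd fetch over HTTP:
#   defined(my $data = get($url)) or die "fetch failed";
# Here a string stands in for the 1K data set so the sketch runs offline.
my $data = "dataset v1\n";

my ($fh, $cache) = tempfile();
close $fh;

if (changed($data, $cache)) {
    open my $out, '>', $cache or die "can't write cache: $!";
    print {$out} $data;
    close $out;
    print "changed -- redistribute to the internal clients here\n";
}

# Second pass with the same data: nothing to do.
print "unchanged\n" unless changed($data, $cache);
```

Run it on a ten-second timer (the polling loop quoted below would do)
and the crappy server only ever sees one small GET per cycle, no matter
how many internal clients you have.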

Gavin

----- Original Message -----
From: "Joshua Hoblitt" <jhoblitt at ifa.hawaii.edu>
To: "Tom Phoenix" <rootbeer at redcat.com>
Cc: <pdx-pm-list at pm.org>
Sent: Sunday, November 17, 2002 1:52 PM
Subject: Re: [Pdx-pm] Q about ithreads && signals


> > It's not a lot of CPU power. It's not even a need for threads and
> > signals.
> > Dress this up as needed:
>
> Well it's threads or forks... the data has to be given out again.  I
> decided to play with threads.  In the medium term the data will have to
> be acquired from multiple sources and averaged.
>
> >   &do_query;
> >   my $next_query = time + 10;
> >   while (1) {
> >     # Wait until it's time...
> >     my $now = time;
> >     if ($now < $next_query) {
> >       # It's not time yet
> >       sleep($next_query - $now);
> >     } elsif ($now > $next_query) {
> >       # Missed the moment. Do whatever you need to do for that case.
> >     }
> >     &do_query;
> >     $next_query += 10;
> >   }
>
> Very slick! I'll try this.
>
> > If the source data is really changing every ten seconds, though, maybe
> > http isn't the best way to transfer it from site to site. Can you put
> > a program onto the source machine? A very simple server could do the
> > trick: A client connects to its port, and the server sends the data
> > down the pipe every time it changes. Pretty straightforward TCP socket
> > stuff should do the trick.
>
> The short answer is no.  I'm stuck with talking to a crappy old HTTP
> server in another organization that's probably ~1995 vintage. The good
> news is the data set is tiny (about 1K).  The problem I'm trying to
> solve is reducing the load on the crappy server.  I'm only serving it
> back up as http again because of legacy clients on our end - this will
> change with time.
>
> > Good luck with it!
>
> Thanks for your Help!
>
> -J
>
> _______________________________________________
> Pdx-pm-list mailing list
> Pdx-pm-list at mail.pm.org
> http://mail.pm.org/mailman/listinfo/pdx-pm-list
>



