From jasona at inetarena.com Tue Jun 4 16:30:46 2002 From: jasona at inetarena.com (Jason White) Date: Wed Aug 4 00:05:34 2004 Subject: Complex return value Message-ID: <004d01c20c0f$1a751a40$5b86fc83@archer> I have a happy and well functioning hash of arrays in a subroutine. If I return that hash to the main program, there's nothing useful inside. I've also tried passing a pointer to the global hash I want the data in and filling the data into that which didn't seem to work, although I don't usually use pointers, so I could have been doing that wrong. What's the best way to get a hash of arrays back into a hash in my main function? Jason White -------------- next part -------------- An HTML attachment was scrubbed... URL: http://mail.pm.org/archives/pdx-pm-list/attachments/20020604/3edaaa7f/attachment.htm From merlyn at stonehenge.com Tue Jun 4 17:23:18 2002 From: merlyn at stonehenge.com (Randal L. Schwartz) Date: Wed Aug 4 00:05:34 2004 Subject: Complex return value In-Reply-To: <004d01c20c0f$1a751a40$5b86fc83@archer> References: <004d01c20c0f$1a751a40$5b86fc83@archer> Message-ID: <863cw2isyx.fsf@blue.stonehenge.com> >>>>> "Jason" == Jason White writes: Jason> I have a happy and well functioning hash of arrays in a subroutine. Jason> If I return that hash to the main program, there's nothing useful inside. That doesn't make sense. Show a 10-line snippet of code that demonstrates the problem. Doing so will almost certainly show that the problem isn't *there*, it's most likely somewhere else. -- Randal L. Schwartz - Stonehenge Consulting Services, Inc. - +1 503 777 0095 Perl/Unix/security consulting, Technical writing, Comedy, etc. etc. See PerlTraining.Stonehenge.com for onsite and open-enrollment Perl training! 
TIMTOWTDI From tkil at scrye.com Tue Jun 4 18:05:51 2002 From: tkil at scrye.com (Tkil) Date: Wed Aug 4 00:05:34 2004 Subject: Complex return value In-Reply-To: <004d01c20c0f$1a751a40$5b86fc83@archer> References: <004d01c20c0f$1a751a40$5b86fc83@archer> Message-ID: >>>>> "JW" == Jason White writes: JW> I have a happy and well functioning hash of arrays in a JW> subroutine. If I return that hash to the main program, there's JW> nothing useful inside. How are you returning it? How are you grabbing the return value? Your phrase "nothing useful inside" makes me think you are trying to return it in list context, but are trying to assign it to a scalar. In that case, the scalar will most likely end up with 2x the number of keys in the hash (or maybe the bucket statistics -- I don't remember exactly.) The basic idea is to return it the same way you try to catch it. So, these two are both "right" ways of doing it, with the latter being more efficient for large hashes: | sub return_hash_as_list { my %h = qw(a b c d); return %h; } | my %hash = return_hash_as_list(); | | sub return_hash_as_ref { my %h = qw(a b c d); return \%h; } | my $href = return_hash_as_ref(); With the same two definitions, these are the wrong ways to do it: | my %broken_hash = return_hash_as_ref(); # exception, odd num of vals in hash | my $borken_href = return_hash_as_list(); # value is "2/8", not an href JW> I've also tried passing a pointer to the global hash I want the JW> data in and filling the data into that which didn't seem to work, JW> although I don't usually use pointers, so I could have been doing JW> that wrong. If you mean "reference" and not pointer, then that should work (unless you're masking it with a "my" or similar -- this is all under "use strict" and "-w", isn't it?). 
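Here are the right and wrong versions together in one runnable script (a sketch; the exact scalar-context value you see depends on your perl version):

```perl
use strict;
use warnings;

sub return_hash_as_list { my %h = qw(a b c d); return %h; }
sub return_hash_as_ref  { my %h = qw(a b c d); return \%h; }

# Right: catch the list in a hash, or the reference in a scalar.
my %hash = return_hash_as_list();
my $href = return_hash_as_ref();
print join( ',', sort keys %hash ), "\n";    # a,c
print join( ',', sort keys %$href ), "\n";   # a,c

# Wrong: flattening the hash into scalar context yields hash
# summary info ("2/8"-style bucket stats on old perls, a key
# count on 5.26 and later), never a reference.
my $not_a_ref = return_hash_as_list();
print "$not_a_ref\n";
```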
If we do: | sub fill_href { my $href = shift; $href->{a} = "b"; $href->{c} = "d"; } Then you can use it like this: | my %hash = ( e => "f" ); | print join ", ", map "$_ => $hash{$_}", keys %hash; | fill_href \%hash; | print join ", ", map "$_ => $hash{$_}", keys %hash; And get: | e => f | e => f, a => b, c => d JW> What's the best way to get a hash of arrays back into a hash in my JW> main function? There's rarely any one good definition of "best". Other than making sure you really spawn off references to new arrays for each value, any of the above techniques (that are supposed to work!) should work. t. TIMTOWTDI From jasona at inetarena.com Tue Jun 4 18:04:21 2002 From: jasona at inetarena.com (Jason White) Date: Wed Aug 4 00:05:34 2004 Subject: Complex return value References: <004d01c20c0f$1a751a40$5b86fc83@archer> <3CFD3B09.74B8FA7@eli.net> Message-ID: <006b01c20c1c$2cb62f70$5b86fc83@archer> Thank you Todd, the pass by reference model worked perfectly well. Since I heard a few other suggestions along with a request for a snippet, I'm supplying the snippet of original code here. This segment of code is reading a list of rules from a configuration file. It is returning a numerically indexed hash since the order of the rules is significant. 
my %rules = GetArrayFileData($configfile); ############################################# # GetArrayFileData retrieves array data from a file # accepts $(filename) # returns a numerically keyed hash of arrays with file line contents # requires available filehandle ############################################# sub GetArrayFileData($){ my (%TmpHash,$i,$tmp,$value,@Value,$key); my ($filename)=@_; open(INFILE, "$filename"); blah blah blah - code bringing the data in from the file and assigning it to %TmpHash close(INFILE); if($opt_v){ Blah Blah - code showing that %TmpHash contains valid data at this point }#end verbose code return %TmpHash; #################################### }#end GetArrayFileData #################################### My hash contained junk. returning a pointer to the private hash and %{}ing that pointer worked fine. Jason White ----- Original Message ----- From: "Todd Caine" To: "Jason White" Cc: "Pdx-Pm-List@Pm. Org" Sent: Tuesday, June 04, 2002 3:11 PM Subject: Re: Complex return value > Hi Jason, > > You can use "return by value". > > my %h = &foo(); > > sub foo { > my %hol; > return %hol; > } > > or you can use "return by reference". > > my $href = &foo(); > > sub foo { > my %hol; > return \%hol; > } > > or you can use a closure. > > my %h; > > &foo(); > > sub foo { > %h = (); > } > > The return by value stuff is a bit slow if you are returning > a huge data structure. In most cases it is probably > sufficient. > > Regards, > Todd > > > Jason White wrote: > > > > I have a happy and well functioning hash of arrays in a > > subroutine. > > If I return that hash to the main program, there's nothing > > useful inside. > > > > I've also tried passing a pointer to the global hash I > > want the data in and filling the data into that which > > didn't seem to work, although I don't usually use > > pointers, so I could have been doing that wrong. > > > > What's the best way to get a hash of arrays back into a > > hash in my main function? 
> > > > Jason White > TIMTOWTDI From joe at joppegaard.com Wed Jun 5 15:02:22 2002 From: joe at joppegaard.com (Joe Oppegaard) Date: Wed Aug 4 00:05:34 2004 Subject: Monitoring floppy disk access - linux Message-ID: I've been trying to find a way to monitor whenever the floppy disk is actually being accessed under linux. I don't mean just when it is mounted, I mean more like when it is actually read. (ie, the first time you do an ls, but not if you did two ls's in a row and it just read it from memory the second time). I'm trying to calculate the time that my floppy drive is actually in use. I don't even know where to start looking for this answer. Unless I missed something, Gkrellm doesn't seem to be able to monitor disk activity from the floppy, or else I could extract the information from there somehow. Any ideas? -- -Joe Oppegaard 360.910.1970 http://joppegaard.com TIMTOWTDI From todd_caine at eli.net Wed Jun 5 15:43:46 2002 From: todd_caine at eli.net (Todd Caine) Date: Wed Aug 4 00:05:34 2004 Subject: Monitoring floppy disk access - linux References: Message-ID: <3CFE7802.3DFCD82E@eli.net> Have you searched the /proc filesystem? There's usually a lot of good information in there. Todd Joe Oppegaard wrote: > > I've been trying to find a way to monitor whenever the floppy disk is > actually being accessed under linux. > > Any ideas? TIMTOWTDI From jasona at inetarena.com Wed Jun 5 15:39:36 2002 From: jasona at inetarena.com (Jason White) Date: Wed Aug 4 00:05:35 2004 Subject: More Hash problems Message-ID: <003101c20cd1$1bc32ff0$5b86fc83@archer> I'm having some trouble with the contents of my hash changing. I start with a numerically keyed list of arrays. The first element of each array is a name. The goal is to have one hash of rules indexable by integer keys and another indexable by name. 
Here is my code: my (%rules) = blah blah blah; #Works and indexes fine while(($key,$value)=each(%rules)){ ($key,@{$value})=@{$value}; $RULES{$key}=$value; } Now %RULES Works and indexes fine with the names as keys but the first element of my arrays are gone in the first hash. I also tried adding %TmpHash=%rules and then parsing the %TmpHash, but it still affected %rules. When I say $a=$b am I just setting a pointer? How can I actually copy the values? is ($key,$value)=each(%rules) effectively popping my first element? should I switch to a foreach keys syntax? Jason White -------------- next part -------------- An HTML attachment was scrubbed... URL: http://mail.pm.org/archives/pdx-pm-list/attachments/20020605/41328d09/attachment.htm From rb-pdx-pm at redcat.com Wed Jun 5 15:51:23 2002 From: rb-pdx-pm at redcat.com (Tom Phoenix) Date: Wed Aug 4 00:05:35 2004 Subject: Monitoring floppy disk access - linux In-Reply-To: Message-ID: On Wed, 5 Jun 2002, Joe Oppegaard wrote: > I've been trying to find a way to monitor whenever the floppy disk is > actually being accessed under linux. I don't mean just when it is > mounted, I mean more like when it is actually read. (ie, the first time > you do an ls, but not if you did two ls's in a row and it just read it > from memory the second time). I'm trying to calculate the time that my > floppy drive is actually in use. I can't figure out why you'd want such a thing. But I'm not just saying that; it has a direct bearing on what you really want: Do you want to know what percentage of the time the disk is spinning, how many hours (or rotations) a particular diskette or drive motor has turned, how many sector reads have happened altogether or in the last hour or since mounting, or something else? You may be able to get something like what you want with lsof, but you're probably talking about something that you could only get by hacking into the floppy drivers (or something equally low-level). 
That is to say, if you need to know about something that depends upon whether or not the cache was read or not, something at the caching level is going to be needed, of course. But something tells me that you already thought of that. Hm. There seeems to be something of an echo in here. Or something. :-) --Tom TIMTOWTDI From jasona at inetarena.com Wed Jun 5 15:56:17 2002 From: jasona at inetarena.com (Jason White) Date: Wed Aug 4 00:05:35 2004 Subject: regular expression matching Message-ID: <005501c20cd3$70819b10$5b86fc83@archer> I usually only use regular expressions for substitutions s///g etc. How exactly do I use a regular expression as a boolean value? for example: $service="cns-noc_server1"; $rule="cns-noc"; if( WHAT DO I PUT HERE){ print "$rule is in $service \n"; } else { print "$rule isn't in $service \n"; } Jason White -------------- next part -------------- An HTML attachment was scrubbed... URL: http://mail.pm.org/archives/pdx-pm-list/attachments/20020605/84ba6094/attachment.htm From ckuskie at dalsemi.com Wed Jun 5 16:26:59 2002 From: ckuskie at dalsemi.com (Colin Kuskie) Date: Wed Aug 4 00:05:35 2004 Subject: More Hash problems In-Reply-To: <003101c20cd1$1bc32ff0$5b86fc83@archer>; from jasona@inetarena.com on Wed, Jun 05, 2002 at 01:39:36PM -0700 References: <003101c20cd1$1bc32ff0$5b86fc83@archer> Message-ID: <20020605142659.C6585@dalsemi.com> On Wed, Jun 05, 2002 at 01:39:36PM -0700, Jason White wrote: > I'm having some trouble with the contents of my hash changing. > I start with a numerically keyed list of arrays. The first element of each array is a name. > The goal is to have one hash of rules indexable by integer keys and another indexable by name. > > Here is my code: 1: my (%rules) = blah blah blah; #Works and indexes fine 2: while(($key,$value)=each(%rules)){ 3: ($key,@{$value})=@{$value}; 4: $RULES{$key}=$value; 5: } I think you're a victim of aliasing. 
You see, here's what your code really does: 1: Assign stuff to hash 2: take each key and a reference to the value from the hash 3: modify the actual contents of the hash value via the reference ($key, @{ $value }) = @{ $value }; 4: create the new hash key/value pair. There are several screwy things going on: 1) in the each, you never use the key! Use values with a for loop instead. foreach $v ( values(%hash) ) { 2) Don't recycle variable names carelessly. If you'd used a different variable name for the array for the 2nd hash, it would work okay. I tried this and it works (note the "my @arr" inside the loop, so each value gets its own fresh array instead of sharing one): my ($key, $v); my %HASH; foreach $v ( values(%hash) ) { my @arr = @{ $v }; $key = shift @arr; $HASH{$key} = \@arr; } Colin TIMTOWTDI From ckuskie at dalsemi.com Wed Jun 5 16:29:24 2002 From: ckuskie at dalsemi.com (Colin Kuskie) Date: Wed Aug 4 00:05:35 2004 Subject: regular expression matching In-Reply-To: <005501c20cd3$70819b10$5b86fc83@archer>; from jasona@inetarena.com on Wed, Jun 05, 2002 at 01:56:17PM -0700 References: <005501c20cd3$70819b10$5b86fc83@archer> Message-ID: <20020605142924.D6585@dalsemi.com> On Wed, Jun 05, 2002 at 01:56:17PM -0700, Jason White wrote: > I usually only use regular expressions for substitutions s///g etc. > > How exactly do I use a regular expression as a boolean value? > for example: man perlop Regexp Quote-Like Operators Learning Perl, v3 Colin TIMTOWTDI From joe at joppegaard.com Wed Jun 5 16:31:41 2002 From: joe at joppegaard.com (Joe Oppegaard) Date: Wed Aug 4 00:05:35 2004 Subject: regular expression matching In-Reply-To: <005501c20cd3$70819b10$5b86fc83@archer> Message-ID: On Wed, 5 Jun 2002, Jason White wrote: > I usually only use regular expressions for substitutions s///g etc. > > How exactly do I use a regular expression as a boolean value? 
> for example: > > $service="cns-noc_server1"; > $rule="cns-noc"; > > if( WHAT DO I PUT HERE){ > print "$rule is in $service \n"; > } else { > print "$rule isn't in $service \n"; > } if ($service =~ m/$rule/) { print "$rule is in $service\n"; } else { print "$rule isn't in $service\n"; } -Joe Oppegaard 360.910.1970 http://joppegaard.com TIMTOWTDI From joe at joppegaard.com Wed Jun 5 16:34:45 2002 From: joe at joppegaard.com (Joe Oppegaard) Date: Wed Aug 4 00:05:35 2004 Subject: Monitoring floppy disk access - linux In-Reply-To: Message-ID: On Wed, 5 Jun 2002, Tom Phoenix wrote: > I can't figure out why you'd want such a thing. But I'm not just saying > that; it has a direct bearing on what you really want: Do you want to know > what percentage of the time the disk is spinning, how many hours (or > rotations) a particular diskette or drive motor has turned, how many > sector reads have happened altogether or in the last hour or since > mounting, or something else? > > You may be able to get something like what you want with lsof, but you're > probably talking about something that you could only get by hacking into > the floppy drivers (or something equally low-level). That is to say, if > you need to know about something that depends upon whether or not the > cache was read or not, something at the caching level is going to be > needed, of course. But something tells me that you already thought of > that. > > Hm. There seeems to be something of an echo in here. Or something. :-) Agh! That messed with my eyes. ;) I actually wanted to monitor how many hours a drive has been used, but it's not too important. Probably be more effort than it is worth. 
-- -Joe Oppegaard 360.910.1970 http://joppegaard.com TIMTOWTDI From cp at onsitetech.com Wed Jun 5 16:16:51 2002 From: cp at onsitetech.com (Curtis Poe) Date: Wed Aug 4 00:05:35 2004 Subject: More Hash problems References: <003101c20cd1$1bc32ff0$5b86fc83@archer> Message-ID: <007001c20cd6$4f831300$1a01a8c0@ot.onsitetech.com> I'm having some trouble figuring out what you are trying to do. Here's your code: my (%rules) = blah blah blah; #Works and indexes fine while(($key,$value)=each(%rules)){ ($key,@{$value})=@{$value}; $RULES{$key}=$value; } Let's walk through this step-by-step. Explanation is after the line of code. my (%rules) = blah blah blah; #Works and indexes fine Okay, that's fine, though you don't need the parens around %rules, but they don't hurt. while(($key,$value)=each(%rules)){ That's syntactically correct. However, I would declare the $key and $value with 'my': while ( my ( $key, $value ) = each %rules ) { That doesn't change how this snippet runs, but it means that they won't exist outside of the while loop. Later, if you refer to $key or $value, you won't have to worry about whether or not they still have a value from the while loop. ($key,@{$value})=@{$value}; Ugh. This made my brain hurt. What this does is assign the first scalar in the $value arrayref to $key and puts the rest of the values in $value. I think this would be clearer: $key = shift @$value; Of course, that overwrites the key, which may not be what you want. $RULES{$key}=$value; You have $RULES in upper-case. Is that what you intended? You're copying these values to a different hash. Now, what I think might be your problem is when you do this: while(($key,$value)=each(%rules)){ ($key,@{$value})=@{$value}; Since $value is a reference to an array, if you change the underlying array, you're changing the value of $value, including in the original array. 
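A tiny self-contained script (with made-up data) shows that mutation in action:

```perl
use strict;
use warnings;

my %rules = ( 1 => [ 'alpha', 'foo', 'bar' ] );

while ( my ($key, $value) = each %rules ) {
    # $value is a reference into %rules, so this list assignment
    # rewrites the very array that %rules still holds:
    ( $key, @$value ) = @$value;
}

# The leading name 'alpha' is now gone from the original hash.
print scalar @{ $rules{1} }, "\n";   # 2, not 3
```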
If that's your problem, and if you really mean to write this to another hash, here's what I would do: my %rules = blah blah blah; #Works and indexes fine while( my ($key,$value) = each %rules) { my @values = @$value; # this will copy the data without changing $value $key = shift @values; $RULES{$key}=\@values; } What I did was copy the values from the array reference to an actual array. With that, you can alter the new array all you want without affecting the original array. The new rules hash will have keys equal to the first value in that array, and the values will be an array reference of the original array values, minus the first element (which was shifted off). Does that make sense, or did I miss the boat? -- Cheers, Curtis Poe Senior Programmer ONSITE! Technology, Inc. www.onsitetech.com 503-233-1418 Taking e-Business and Internet Technology To The Extreme! TIMTOWTDI From cp at onsitetech.com Wed Jun 5 16:23:15 2002 From: cp at onsitetech.com (Curtis Poe) Date: Wed Aug 4 00:05:35 2004 Subject: regular expression matching References: <005501c20cd3$70819b10$5b86fc83@archer> Message-ID: <007701c20cd7$347b1a20$1a01a8c0@ot.onsitetech.com> $service="cns-noc_server1"; $rule="cns-noc"; if( $service =~ /\Q$rule\E/ ){ print "$rule is in $service \n"; } else { print "$rule isn't in $service \n"; } \Q and \E help to ensure that the text is interpreted literally (i.e., characters with special meaning to a regex are treated literally). Read 'perldoc perlre' for more information about pitfalls using \Q and \E. -- Cheers, Curtis Poe Senior Programmer ONSITE! Technology, Inc. www.onsitetech.com 503-233-1418 Taking e-Business and Internet Technology To The Extreme! ----- Original Message ----- From: Jason White To: pdx-pm-list@pm.org Sent: Wednesday, June 05, 2002 1:56 PM Subject: regular expression matching I usually only use regular expressions for substitutions s///g etc. How exactly do I use a regular expression as a boolean value? 
for example: Jason White TIMTOWTDI From rb-pdx-pm at redcat.com Wed Jun 5 16:38:20 2002 From: rb-pdx-pm at redcat.com (Tom Phoenix) Date: Wed Aug 4 00:05:35 2004 Subject: More Hash problems In-Reply-To: <003101c20cd1$1bc32ff0$5b86fc83@archer> Message-ID: On Wed, 5 Jun 2002, Jason White wrote: > I'm having some trouble with the contents of my hash changing. Well, the contents are being changed by you, me, or Santa Claus. I'll let you narrow that down. :-) > I start with a numerically keyed list of arrays. The first element of > each array is a name. The goal is to have one hash of rules indexable by > integer keys and another indexable by name. "Indexable by integer keys" sounds like "a job for an array", doesn't it? > Here is my code: > my (%rules) = blah blah blah; #Works and indexes fine There's no extra charge for writing your code like this: my(%rules) = qw{ fred 210 barney 170 dino 30 }; # or whatever... ...and it has the added benefit that we can see more about what you're doing, and thereby offer more (and better) help. > while(($key,$value)=each(%rules)){ > ($key,@{$value})=@{$value}; Isn't that just the hard way to write this? $key = shift @$value; Not only the hard way to write it, but it's more work for Perl. (Less efficient.) And why are you reusing $key here? That's just confusing. If you don't care what the keys are, use a foreach loop on values(%rules), since that will be clearer to your maintenance programmer. Or at least, use two different variables for two different things. Variables are cheap; use all you need. > $RULES{$key}=$value; > } So, where are the numbers? You've built a new hash (presuming that %RULES was empty before you started), but I can't see that either one is keyed by number. Unless %rules was set up that way at the start, but you didn't show its real data to us. (Hint, hint. :-) > Now %RULES Works and indexes fine with the names as keys but the first > element of my arrays are gone in the first hash. 
And in %RULES, too, of course. Ah, I thought that that was a shift. Maybe you wanted something like this? my $second_key = $value->[0]; > I also tried adding %TmpHash=%rules and then parsing the %TmpHash, but > it still affected %rules. That's because it's not a "deep copy". Copying a reference gives you a copy of a reference, not a new reference pointing to a copy of the data. (But if you really want a deep copy, that's possible. Finding the correct module on CPAN is left as an exercise for the ambitious student.) I hope this gets you back on the right track. Good luck with it! --Tom TIMTOWTDI From rb-pdx-pm at redcat.com Wed Jun 5 16:47:39 2002 From: rb-pdx-pm at redcat.com (Tom Phoenix) Date: Wed Aug 4 00:05:35 2004 Subject: Monitoring floppy disk access - linux In-Reply-To: Message-ID: On Wed, 5 Jun 2002, Joe Oppegaard wrote: > I actually wanted to monitor how many hours a drive has been used, Ah, that's easy: Set up a timer to turn on or off at the same time as the drive does. Oh, I should specify: a mechanical timer, like the ones used on, for example, aircraft engines and bulldozers, since they wear by the hour. See, it's a hardware problem.... "How many software techs does it take to change a lightbulb? None; it's a hardware problem." Problem solved. :-) Seriously, though, I'm sure _somebody_ makes such a timer. But here's another way to attack the problem without low-level software hackery: Get a sensor that reports when the power is on (voltage or current sensor, probably) and attaches to an available port. Then you just write a simple program that polls the port once per second, or whatever. You should be able to do that in Perl, I think. Of course, now you're into low-level _hardware_ hackery. Oh, well! 
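For what it's worth, the polling loop itself is only a few lines of Perl. Everything about the sensor below is invented -- the device path and its "0"/"1" protocol are stand-ins for whatever your hardware actually provides:

```perl
use strict;
use warnings;

my $sensor     = '/dev/drive_power_sensor';   # hypothetical device
my $seconds_on = 0;

for my $tick ( 1 .. 10 ) {    # poll once a second for ten seconds
    open my $fh, '<', $sensor or last;   # give up if the sensor vanishes
    my $state = <$fh>;
    close $fh;
    $state = '0' unless defined $state;
    chomp $state;
    $seconds_on++ if $state eq '1';
    sleep 1;
}
print "motor powered for $seconds_on of the last 10 seconds\n";
```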
--Tom TIMTOWTDI From joe at joppegaard.com Wed Jun 5 17:03:32 2002 From: joe at joppegaard.com (Joe Oppegaard) Date: Wed Aug 4 00:05:35 2004 Subject: Monitoring floppy disk access - linux In-Reply-To: Message-ID: On Wed, 5 Jun 2002, Tom Phoenix wrote: > > See, it's a hardware problem.... "How many software techs does it take to > change a lightbulb? None; it's a hardware problem." Problem solved. :-) > Haha, bullseye! > Seriously, though, I'm sure _somebody_ makes such a timer. But here's > another way to attack the problem without low-level software hackery: Get > a sensor that reports when the power is on (voltage or current sensor, > probably) and attaches to an available port. Then you just write a simple > program that polls the port once per second, or whatever. You should be > able to do that in Perl, I think. Of course, now you're into low-level > _hardware_ hackery. Oh, well! > Great ideas, thanks. Should be fun to setup. Off to the garage. ;) -- -Joe Oppegaard 360.910.1970 http://joppegaard.com TIMTOWTDI From joshua_keroes at eli.net Wed Jun 5 17:26:30 2002 From: joshua_keroes at eli.net (Joshua Keroes) Date: Wed Aug 4 00:05:35 2004 Subject: Matching stuff Message-ID: <20020605222630.GF28972@eli.net> Found out my posts have been going to /dev/null ever since my company mailserver decided that my emails really aren't From: jkeroes but joshua_keroes. As the Church-lady would say, "Isn't that special?" So here's the this post. If I can find the other, I'll repost it too: __BEGIN__ use strict; my $service = "cns-noc_server1"; my $rule = "cns-noc"; # What you want: if ( $service =~ /$rule/ ) { # do stuff } # another possibility: if ( index $service, $rule ) { # do stuff } # silly if ($service =~ s/$rule/$rule) { # do stuff; } # sillier for ( 0 .. length($service) - length($rule) ) { if ( substr($service, $_, length($rule)) eq $rule ) { # do stuff } } # yet more silly my @maybe = map { substr $service, $_, length $rule } 0 .. 
length $service; if ( grep { $_ eq $rule } @maybe ) { # do stuff } # beep beep honk honk my %maybe = map { substr($service, $_, length $rule) => 0 } 0 .. length $service; if ( exists $maybe{$rule} ) { # do stuff } # sill-to-the-y my %permutations; for my $start (0..length $service) { for my $len (1..length($service) - $start) { $permutations{ substr $service, $start, $len }++; } } if ( exists $permutations{$rule} ) { # do stuff } # Other assorted silliness shall be left as an exercise to the reader. __END__ Wuv, Joshua TIMTOWTDI From tkil at scrye.com Wed Jun 5 18:51:14 2002 From: tkil at scrye.com (Tkil) Date: Wed Aug 4 00:05:35 2004 Subject: More Hash problems In-Reply-To: References: Message-ID: >>>>> "Jason" == Jason White writes: Jason> I start with a numerically keyed list of arrays. The first Jason> element of each array is a name. By "keyed list", do you mean hashes with numeric keys and references to arrays for values? That is: | my %num_keyed = ( 3 => [ 'three', 1, 2, 3, 4 ], | 11 => [ 'eleven', 4, 3, 9, 1, 'foo', 'bar' ], | 17 => [ 'seventeen', 2, 4, 'gizzard' ] ); Jason> The goal is to have one hash of rules indexable by integer keys Jason> and another indexable by name. Do you want the values of these two hashes to refer to the same instances of the rules (Tom's "shallow copy", or aliasing), or do you want two totally different and independent ["deep"] copies? Anyway, shallow copy of values: | my %name_keyed_shallow = map { $_->[0] => $_ } values %num_keyed; Or: | my %name_keyed_shallow; | foreach my $rule ( values %num_keyed ) | { | my $name = $rule->[0]; | $name_keyed_shallow{$name} = $rule; | } Note how the variable names make this snippet almost self-documenting. I personally prefer the "map" version; it's more compact, probably faster, and it more clearly communicates my intent: taking a list as input, getting a transformed list as output. 
Deep copy is similar, except I create a new copy of the rule array by dereferencing it and then putting it inside an anonymous array ref: | my %name_keyed_deep = map { $_->[0] => [ @$_ ] } values %num_keyed; To see the difference, have fun with this: | use Data::Dumper qw(); | print Data::Dumper->Dump( | [ \%num_keyed, \%name_keyed_shallow, \%name_keyed_deep ], | [ qw( num_keyed name_keyed_shallow name_keyed_deep ) ] ); >>>>> "Tom" == Tom Phoenix writes: Tom> "Indexable by integer keys" sounds like "a job for an array", Tom> doesn't it? Tom, I agreed with most of what you said, but this particular issue depends on how big / sparse the integer keys are. If my integer keys are (1, 10, 100, 1000, 10000), then a hash is the better choice. t. TIMTOWTDI From al at shadowed.net Wed Jun 5 20:26:37 2002 From: al at shadowed.net (Allison Randal) Date: Wed Aug 4 00:05:35 2004 Subject: Apocalypse 5 Message-ID: <20020606012637.GA15036@shadowed.net> It has been slashdotted and perl.com-ed, but just in case you missed it, Apocalypse 5 is out: http://www.perl.com/pub/a/2002/06/04/apo5.html This episode: regular expressions. There's a lot of cool stuff in this one. Perl rules! (excuse the bad pun) :) Allison TIMTOWTDI From rb-pdx-pm at redcat.com Wed Jun 5 21:09:58 2002 From: rb-pdx-pm at redcat.com (Tom Phoenix) Date: Wed Aug 4 00:05:35 2004 Subject: Matching stuff In-Reply-To: <20020605222630.GF28972@eli.net> Message-ID: On Wed, 5 Jun 2002, Joshua Keroes wrote: > if ( $service =~ /$rule/ ) { That's fine much of the time. But if there's a chance that $rule could have metacharacters, or could be empty, you'll want to use \Q in front of it. > if ( index $service, $rule ) { Eek! No, that's not right. Remember that index will return 0 (which is false) if the substring is found at the beginning of the string, and -1 (which is true) if it's not found at all. You must have meant this: if (-1 != index $service, $rule) { I don't think you were trying to be that silly... 
until the later items. --Tom TIMTOWTDI From tkil at scrye.com Wed Jun 5 22:50:46 2002 From: tkil at scrye.com (Tkil) Date: Wed Aug 4 00:05:35 2004 Subject: regular expression matching In-Reply-To: <005501c20cd3$70819b10$5b86fc83@archer> References: <005501c20cd3$70819b10$5b86fc83@archer> Message-ID: >>>>> "Jason" == Jason White writes: Jason> How exactly do I use a regular expression as a boolean value? You just evaluate it in a boolean context. But that's probably not what you meant. Jason> $service="cns-noc_server1"; Jason> $rule="cns-noc"; Jason> if( WHAT DO I PUT HERE ) { Jason> ??? print "$rule is in $service \n"; Jason> } else { Jason> ??? print "$rule isn't in $service \n"; Jason> } The answer depends on what you mean by "in", when you say that "$rule is in $service". If you mean substring inclusion ("is $rule a substring of $service?"), then you want "index". man perlfunc. If you mean to interpret the contents of $rule as a regular expression, and see if that regular expression matches something in $service, then you want the bind operator "=~" and m// (or just //), possibly also involving \Q (as Tom P. points out in a later post) or qr//. All of this is covered in perlop and perlre. The best way to do either of these depends a fair bit on how many rules you plan on matching against how many services. t. TIMTOWTDI From joshua_keroes at eli.net Thu Jun 6 02:03:52 2002 From: joshua_keroes at eli.net (Joshua Keroes) Date: Wed Aug 4 00:05:35 2004 Subject: Matching stuff In-Reply-To: References: <20020605222630.GF28972@eli.net> Message-ID: <20020606070352.GM24354@eli.net> On (Wed, Jun 05 19:09), Tom Phoenix wrote: > On Wed, 5 Jun 2002, Joshua Keroes wrote: > > > if ( $service =~ /$rule/ ) { > > That's fine much of the time. But if there's a chance that $rule could > have metacharacters, or could be empty, you'll want to use \Q in front of > it. > > > if ( index $service, $rule ) { > > Eek! No, that's not right. 
Remember that index will return 0 (which is > false) if the substring is found at the beginning of the string, and -1 > (which is true) if it's not found at all. You must have meant this: > > if (-1 != index $service, $rule) { > > I don't think you were trying to be that silly... until the later items. I stand corrected. No, those first two were honest help. All the latter ones; well those were the silly ones. :-) Serves me right for not running anything first. -J TIMTOWTDI From kellert at ohsu.edu Thu Jun 6 18:54:12 2002 From: kellert at ohsu.edu (Tom Keller) Date: Wed Aug 4 00:05:35 2004 Subject: opendir problem Message-ID: Greetings, I always seem to have trouble with I/O ...grrrrr. When I run the following program (code snippet follows #####) I get the following: kellert% ~/bin/move_seq.pl Enter the parent directory for your input data directories: /Volumes/Core\ Lab/Vibrio/ Enter the parent directory for output of text files: /Volumes/Core\ Lab/Vibrio/current_vibrio.seq Can't open parent directory /Volumes/Core\ Lab/Vibrio/: No such file or directory at /Users/kellert/bin/move_seq.pl line 25, <STDIN> line 2. I'm stumped. Why doesn't opendir work with $parent?? ###### code snippet ########## print "Enter the parent directory for your input data directories: "; my $parent = <STDIN>; chomp $parent; print "Enter the parent directory for output of text files: "; my $out_dir = <STDIN>; chomp $out_dir; mkdir $out_dir unless -e $out_dir && -d _; ## check to see if exists, if not, make the directory chdir $parent; #change to input directory my (@all_text); opendir (DIR, $parent) or die "Can't open parent directory $parent: $!"; print "Parent_dir is $parent\n"; ## sanity check ######### The program dies at this point with "Can't open parent directory /Volumes/Core\ Lab/Vibrio/: No such file or directory at /Users/kellert/bin/move_seq.pl line 25, <STDIN> line 2." Any suggestions? Thanks, Tom K. -- Thomas J. Keller, Ph.D. 
MMI Research Core Facility Oregon Health & Science University 3181 SW Sam Jackson Park Rd Portland, Oregon 97201 TIMTOWTDI From tkil at scrye.com Thu Jun 6 20:20:04 2002 From: tkil at scrye.com (Tkil) Date: Wed Aug 4 00:05:35 2004 Subject: opendir problem In-Reply-To: References: Message-ID: >>>>> "Tom" == Tom Keller writes: Tom> kellert% ~/bin/move_seq.pl Tom> Enter the parent directory for your input data directories: Tom> /Volumes/Core\ Lab/Vibrio/ Tom> Enter the parent directory for output of text files: /Volumes/Core\ Tom> Lab/Vibrio/current_vibrio.seq Tom> Can't open parent directory /Volumes/Core\ Lab/Vibrio/: No such file Tom> or directory at /Users/kellert/bin/move_seq.pl line 25, line Tom> 2. Tom> I'm stumped. Why doesn't opendir work with $parent?? You don't need to backslash spaces in paths, unless they're going to be interpreted by the shell or other quote-mangling. (Or, you can do your own backslash processing.) Did you try it with no backslash at all? That is, just type in /Volumes/Core Lab/Vibrio and see if that works? More to the point, you're not checking the return value from "mkdir", which should have alerted you earlier that something was amiss. So, instead of | mkdir $out_dir unless -e $out_dir && -d _; Consider something like: | unless (-d $out_dir) | { | mkdir $out_dir | or die "couldn't mkdir '$out_dir': $!"; | } If you're worried about having to make multiple levels of directories, investigate the File::Path module. t. TIMTOWTDI From jasona at inetarena.com Tue Jun 11 14:24:18 2002 From: jasona at inetarena.com (Jason Annin-White) Date: Wed Aug 4 00:05:35 2004 Subject: NACHA Message-ID: <001801c2117d$9bebc7e0$6601a8c0@jasons> Has anybody on here worked with ACH before? Do you know if there are any Bundle::NACHA or ACH.pm modules being worked on anywhere? 
Since I would be considered an amateur, I would rather contribute to another project than try to write a module on my own, but regardless, if there aren't any modules out there I'll be writing a lot of the code. It seems there is a standard for generating ACH objects; however, the differences in how institutions receive them may be why there are no unifying packages? Don't know.

Oh and it's a small world. Joe O, are you in my brother's friend's A+ class? I've never been asked about floppy drive access time before, and then twice in one week is pretty odd.

Jason White -------------- next part -------------- An HTML attachment was scrubbed... URL: http://mail.pm.org/archives/pdx-pm-list/attachments/20020611/22163001/attachment.htm

From joe at joppegaard.com Tue Jun 11 15:17:11 2002 From: joe at joppegaard.com (Joe Oppegaard) Date: Wed Aug 4 00:05:35 2004 Subject: NACHA In-Reply-To: <001801c2117d$9bebc7e0$6601a8c0@jasons> Message-ID:

On Tue, 11 Jun 2002, Jason Annin-White wrote:
>
> Oh and it's a small world. Joe O, are you in my brother's friend's A+ class? I've never been asked about floppy drive access time before, and then twice in one week is pretty odd.
>

Hah, yes, we must have had that class together. Though I'm not sure who your brother's friend is, tell him Hi for me. ;)

> Jason White
>

-- -Joe Oppegaard 360.910.1970 http://joppegaard.com
I need a way to restrict the amount of resources that the map server can use. It's continuously chewing up all the RAM on maus (the PTP server) and starving the process table, which essentially means that it can't start any new processes and kills the server.

As I'm sure you can imagine this gets pretty annoying. :-)

After the last crash I restricted apache down to a maximum of 3 servers and 50 concurrent sessions in the hope that it would deny people access rather than serve more requests than it can handle, but this doesn't seem to have made a difference; in fact, if anything it seems to have made it worse.

At this point all I really care about is that I can leave the map server running and it doesn't crash the server every couple of days. If that means only a couple people can view it at a time ... I'm down with that. Currently the best solution I have is a cronjob that stops the map server every time the load goes over 10, but that's ugly and I suspect not terribly reliable.

If anyone out there knows this stuff I sure would appreciate hearing from you as mod_perl is pretty much voodoo to me. Thanks, Adam.

-- "The first casualty, when war comes, is truth." -- Senator Hiram Johnson -- The Personal Telco Project - http://www.personaltelco.net/ Un/Subscribe: http://lists.personaltelco.net/mailman/listinfo/ptp/ Archives: http://lists.personaltelco.net/pipermail/ptp/ Etiquette: http://www.personaltelco.net/index.cgi/MailingListEtiquette ----- End forwarded message ----- -- Michael Rasmussen aka mikeraz Be appropriate && Follow your curiosity "They that give up essential liberty to obtain temporary safety, deserve neither liberty nor safety." -- Benjamin Franklin and the fortune cookie says: My weight is perfect for my height -- which varies.

TIMTOWTDI From merlyn at stonehenge.com Wed Jun 12 14:22:32 2002 From: merlyn at stonehenge.com (Randal L. Schwartz) Date: Wed Aug 4 00:05:35 2004 Subject: [adam@personaltelco.net: [ptp] any mod_perl guru's out there?]
In-Reply-To: <20020612112428.B1615@patch.com> References: <20020612112428.B1615@patch.com> Message-ID: <868z5knvyf.fsf@blue.stonehenge.com>

>>>>> "mikeraz" == mikeraz writes:

mikeraz> I need a way to restrict the amount of resources that the map
mikeraz> server can use. It's continuously chewing up all the RAM on
mikeraz> maus (the PTP server) and starving the process table which
mikeraz> essentially means that it can't start any new processes and
mikeraz> kills the server.

mikeraz> As I'm sure you can imagine this gets pretty annoying. :-)

mikeraz> After the last crash I restricted apache down to a maximum of
mikeraz> 3 servers and 50 concurrent sessions in the hope that it
mikeraz> would deny people access rather than serve more requests than
mikeraz> it can handle but this doesn't seem to have made a
mikeraz> difference, in fact if anything it seems to have made it
mikeraz> worse.

Are you using a reverse proxy? Are you caching what you can? Are you serving non-mod_perl stuff with a non-mod_perl server? I recently got a lot of insight about working with mod_proxy and caching servers while rebuilding www.stonehenge.com. If you could use my help, let me know. I gave a presentation to pdx.pm last month about my new structure.

-- Randal L. Schwartz - Stonehenge Consulting Services, Inc. - +1 503 777 0095 Perl/Unix/security consulting, Technical writing, Comedy, etc. etc. See PerlTraining.Stonehenge.com for onsite and open-enrollment Perl training!

TIMTOWTDI From cp at onsitetech.com Wed Jun 12 14:51:54 2002 From: cp at onsitetech.com (Curtis Poe) Date: Wed Aug 4 00:05:35 2004 Subject: Meeting tonight References: <20020612112428.B1615@patch.com> Message-ID: <008d01c2124a$9a67d0d0$1a01a8c0@ot.onsitetech.com>

Hi all, Don't forget the meeting tonight about POE (no, it's not about me :) Details are at http://portland.pm.org/. I look forward to seeing all of you there. -- Cheers, Curtis Poe Senior Programmer ONSITE! Technology, Inc.
www.onsitetech.com 503-233-1418 Taking e-Business and Internet Technology To The Extreme! TIMTOWTDI From mikeraz at patch.com Wed Jun 12 17:10:30 2002 From: mikeraz at patch.com (mikeraz@patch.com) Date: Wed Aug 4 00:05:35 2004 Subject: file test for file being open by another process? Message-ID: <20020612151030.B3476@patch.com> reviewing `man perlfunc` didn't solve this for me. I'd like to know if another process has a file open on a system. The file in question is being copied from one system on a network to another (read: Windows users are copying files from their local drive to the Samba share point on my Linux server.) When it fully arrives on the second system my process should begin to work on it. I was hoping to structure the code like: if ( ! -X filename ) { process away; } Where X would be the appropriate file test for "open by another process". If that is not available I'll monitor for stabilization of size and [mca]time. Unless you have a better suggestion ... :) -- Michael Rasmussen aka mikeraz Be appropriate && Follow your curiosity "They that give up essential liberty to obtain temporary safety, deserve neither liberty nor safety." -- Benjamin Franklin and the fortune cookie says: Tax reform means "Don't tax you, don't tax me, tax that fellow behind the tree." -- Russell Long TIMTOWTDI From merlyn at stonehenge.com Wed Jun 12 17:55:16 2002 From: merlyn at stonehenge.com (Randal L. Schwartz) Date: Wed Aug 4 00:05:35 2004 Subject: file test for file being open by another process? In-Reply-To: <20020612151030.B3476@patch.com> References: <20020612151030.B3476@patch.com> Message-ID: <86y9dkksyz.fsf@blue.stonehenge.com> >>>>> "mikeraz" == mikeraz writes: mikeraz> I'd like to know if another process has a file open on a system. There's no core (POSIX?) Unix API to know that. You can crawl through kernel data structures, ala "ofiles" or "lsof" to figure it out, but that's non-portable. 
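[A rough sketch of that non-portable route, for illustration: shell out to lsof(8), if it happens to be installed on the box. The file path here is made up, not from the thread.]

```perl
#!/usr/bin/perl -w
use strict;

# Non-portable sketch: ask lsof(8) which processes hold the file open.
# Assumes lsof is installed and on the PATH; the path below is hypothetical.
my $file = "/home/samba/incoming/data.xls";
chomp(my @pids = `lsof -t $file 2>/dev/null`);   # -t prints bare PIDs
if (@pids) {
    print "$file is still open by pid(s) @pids -- waiting\n";
} else {
    print "no one has $file open; safe to process\n";
}
```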
Even if the file was open, you could rename it somewhere else, and it'd still keep writing to the new place. Or delete it, and it becomes a nameless file that is still open.

mikeraz> The file in question is being copied from one system on a network
mikeraz> to another (read: Windows users are copying files from their local
mikeraz> drive to the Samba share point on my Linux server.)

mikeraz> When it fully arrives on the second system my process should begin
mikeraz> to work on it.

Your two choices are cooperative or ad hoc.

Cooperatively: have your client write the file into FOO.tmp, then rename when done to FOO.done. You don't touch the FOO.tmp files.

Ad hoc: wait until -A $file > 0.1

Sorry, that's all you get.

-- Randal L. Schwartz - Stonehenge Consulting Services, Inc. - +1 503 777 0095 Perl/Unix/security consulting, Technical writing, Comedy, etc. etc. See PerlTraining.Stonehenge.com for onsite and open-enrollment Perl training!

TIMTOWTDI From mikeraz at patch.com Thu Jun 13 10:38:35 2002 From: mikeraz at patch.com (mikeraz@patch.com) Date: Wed Aug 4 00:05:35 2004 Subject: brain hurt and file test Message-ID: <20020613083835.A10550@patch.com>

We tracked it down. The -[AMC] file tests . . . well, as `man perlfunc` will tell you, they do not return the [acm]time of the file. They:

-M Age of file in days when script started.
-A Same for access time.
-C Same for inode change time.

Ah, when the script started. Ah, expressed in days. (??)

Note, if the file changed since the time the script started this number will be negative.

Note, it is expressed in days. Multiply by 86400 if you want the value expressed in seconds.

So if you want to know how long a file has not been mucked with (as I do), use the stat function.

-- Michael Rasmussen aka mikeraz Be appropriate && Follow your curiosity "They that give up essential liberty to obtain temporary safety, deserve neither liberty nor safety."
-- Benjamin Franklin and the fortune cookie says: Q: How do you play religious roulette? A: You stand around in a circle and blaspheme and see who gets struck by lightning first.

TIMTOWTDI From merlyn at stonehenge.com Thu Jun 13 10:53:46 2002 From: merlyn at stonehenge.com (Randal L. Schwartz) Date: Wed Aug 4 00:05:35 2004 Subject: brain hurt and file test In-Reply-To: <20020613083835.A10550@patch.com> References: <20020613083835.A10550@patch.com> Message-ID: <86lm9ji391.fsf@blue.stonehenge.com>

>>>>> "mikeraz" == mikeraz writes:

mikeraz> We tracked it down. The -[AMC] file tests . . . well, as `man perlfunc`
mikeraz> will tell you, they do not return the [acm]time of the file. They:
mikeraz> -M Age of file in days when script started.
mikeraz> -A Same for access time.
mikeraz> -C Same for inode change time.
mikeraz> Ah, when the script started. Ah, expressed in days. (??)
mikeraz> Note, if the file changed since the time the script started this number will
mikeraz> be negative.
mikeraz> Note, it is expressed in days. Multiply by 86400 if you want the value
mikeraz> expressed in seconds.
mikeraz> So if you want to know how long a file has not been mucked with (as I do),
mikeraz> use the stat function.

Or add

    $^T = time;

to the top of your checking loop, which changes the offset used by -A to *now*. Actually, then:

    $^T = time - 30;
    if (-A $file > 0) {
        # file was last modified more than 30 secs ago
    }

would be a cheap way to do this without scaling for days/seconds. :)

-- Randal L. Schwartz - Stonehenge Consulting Services, Inc. - +1 503 777 0095 Perl/Unix/security consulting, Technical writing, Comedy, etc. etc. See PerlTraining.Stonehenge.com for onsite and open-enrollment Perl training!
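[A minimal polling sketch built on that $^T trick -- the file name is hypothetical, and it tests -M, the modification age (note that a later post in this thread discusses -M versus -A):]

```perl
#!/usr/bin/perl -w
use strict;

# Wait until the (hypothetical) file has gone unmodified for 30+ seconds,
# then process it. Resetting $^T each pass makes the file-test ages
# relative to "30 seconds ago" instead of script start time.
my $file = "incoming/upload.dat";
while (1) {
    $^T = time - 30;        # -M is now measured from 30 seconds ago
    last if -M $file > 0;   # positive => last modified more than 30s ago
    sleep 5;
}
print "$file looks stable; processing it now\n";
```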
TIMTOWTDI From robb at empire2.com Thu Jun 13 13:43:36 2002 From: robb at empire2.com (Rob Bloodgood) Date: Wed Aug 4 00:05:35 2004 Subject: Meeting tonight In-Reply-To: <008d01c2124a$9a67d0d0$1a01a8c0@ot.onsitetech.com> Message-ID: > Don't forget the meeting tonight about POE (no, it's not about me :) > > Details are at http://portland.pm.org/. I look forward to seeing > all of you there. dangit dangit dangit dangit I *knew* there was something I was forgetting last nite! Rob TIMTOWTDI From poec at yahoo.com Thu Jun 13 14:09:15 2002 From: poec at yahoo.com (Ovid) Date: Wed Aug 4 00:05:35 2004 Subject: Meeting tonight In-Reply-To: Message-ID: <20020613190915.74531.qmail@web9101.mail.yahoo.com> --- Rob Bloodgood wrote: > > Don't forget the meeting tonight about POE (no, it's not about me :) > > > > Details are at http://portland.pm.org/. I look forward to seeing > > all of you there. > > dangit dangit dangit dangit I *knew* there was something I was forgetting > last nite! > > Rob Rob, You missed a great meeting. About 20 people showed up for the technical meeting, 2 of them (myself being one) managed to sit on desk returns that weren't attached and sent them crashing to the floor, and Todd Caine gave a great presentation. The social stuff before and after was also pretty good. As for the next meeting, which should be Wednesday, July 10th, same time and place, chromatic will be giving a presentation about testing. chromatic and Michael Schwern have been doing considerable work pushing testing on an unsuspecting Perl world and they have had great success with this. Don't miss this meeting! I've had the pleasure of working with chromatic and I am very pleased with the things I have learned. 
Cheers, Curtis "Ovid" Poe ===== "Ovid" on http://www.perlmonks.org/ Someone asked me how to count to 10 in Perl: push@A,$_ for reverse q.e...q.n.;for(@A){$_=unpack(q|c|,$_);@a=split//; shift@a;shift@a if $a[$[]eq$[;$_=join q||,@a};print $_,$/for reverse @A __________________________________________________ Do You Yahoo!? Yahoo! - Official partner of 2002 FIFA World Cup http://fifaworldcup.yahoo.com TIMTOWTDI From rb-pdx-pm at redcat.com Thu Jun 13 16:31:38 2002 From: rb-pdx-pm at redcat.com (Tom Phoenix) Date: Wed Aug 4 00:05:35 2004 Subject: brain hurt and file test In-Reply-To: <86lm9ji391.fsf@blue.stonehenge.com> Message-ID: On 13 Jun 2002, Randal L. Schwartz wrote: > $^T = time - 30; > if (-A $file > 0) { > # file was last modified more than 30 secs ago > } > > would be a cheap way to do this without scaling for days/seconds. :) Just in case there's any confusion between the code and the comment: It's -M to get the "modification age" and -A to get the "accessed age". Randal knows this stuff so well that sometimes he doesn't read the comments. :-) As a rule of thumb, when thinking about -M, -A, and -C, you should nearly always use -M, which tells you "how old is the data in this file", or "how long has it been since this file was updated". The mtime (which is the underlying timestamp value used by Perl to calculate the -M value) is the timestamp used by 'ls -l' and make, and it's what most people think of as the "age" of the file. When in doubt, go with -M. Rarely you'll want -A, which tells you "how long has it been since somebody bothered looking at this data". That's mostly useful in identifying abandoned, junk files, like those in /tmp directories. If a file's atime is from last year, it's a pretty good guess that it doesn't contain frequently-accessed data. 
(There should be an option to grep to leave atimes unchanged when a grep fails, but I've never heard of a grep that has that option -- so, if you grep (even unsuccessfully) through a bunch of files, you're whacking their atimes.) But here's a real-world use for the atime: If one of your web pages (or images, or whatever) has an atime that's old, probably none of your active pages links to it. (Fans of old British comedies would ask that file, "Are you being served?") The -C filetest is almost never what you want. For those who really need to know, it's telling you "how long has it been since anything important (like its name, permissions, contents) changed about this file". The ctime is used for incremental backups; if the ctime is older than your previous incremental backup, you can skip this one. If you're not doing incremental backups, you can probably skip this one, too. One more confusing wrinkle. When you read from a file, the atime is always updated to the current time. When you write to a file, the mtime is always updated. That's simple. The catch is that (and somebody correct me if I'm wrong here) on some-but-not-all systems, when you write to a file, the system updates the mtime and ALSO the atime, as if you had read from the file as well. (There's a good reason for doing that, but I don't feel like setting out both sides of a moot argument here. This is almost a FMTEYEWTK as it is.) Check your machine's stat(2) for the honest truth about the three timestamps. And don't forget that, as somebody else pointed out, Perl's stat(), time, and $^T all use timestamps (numbers like 1024001900, always positive integers, usually up in the hundreds of millions or more), but -M and friends give ages in days (numbers like 3.14159, rarely integers, sometimes negative, not often larger than a few thousand). Of course, it's easy to work with either of these kinds of numbers, although it's best to use a module when things get tricky. 
my $mtime = -M $file; # age in days
my $secs = 24 * 60 * 60 * $mtime; # age in secs
print "That file is around $secs seconds old.\n";

--Tom "secs on my mind" Phoenix

TIMTOWTDI From ckuskie at dalsemi.com Thu Jun 13 16:54:53 2002 From: ckuskie at dalsemi.com (Colin Kuskie) Date: Wed Aug 4 00:05:35 2004 Subject: Parse::RecDescent and autotree Message-ID: <20020613145453.A17762@dalsemi.com>

Hey! I've been learning Parse::RecDescent to use in a project here at work, parsing SPICE netlists. I decided that since this was my first attempt at Parse::RecDescent, to keep it relatively simple, and just to build a parser that would return a parse tree and then walk through it. I think I may have either found a bug, a documentation problem, or (more likely) a serious lapse in my understanding of P::RD.

Below you'll see my test script; it's totally self-contained, printing a bunch of output on STDOUT:

1) The two grammars that are generated. One uses explicit actions to generate the tree, according to the documentation, and the other uses the <autotree> directive.

2) The first two levels of each parse tree. (I tried using Data::Dumper to look at the actual trees, but it complained "Can't handle type. at spicy line 74", the line where Dumper is called)

According to the documentation, the two parse trees should be exactly the same, but they aren't. Here's some more details:

perl -v: This is perl, version 5.005_03 built for sun4-solaris
Parse::RecDescent 1.80

Can any of you help me understand what's different and why?
Thanks, Colin Kuskie

TIMTOWTDI From ckuskie at dalsemi.com Thu Jun 13 17:01:43 2002 From: ckuskie at dalsemi.com (Colin Kuskie) Date: Wed Aug 4 00:05:35 2004 Subject: Parse::RecDescent and autotree In-Reply-To: <20020613145453.A17762@dalsemi.com>; from ckuskie@dalsemi.com on Thu, Jun 13, 2002 at 02:54:53PM -0700 References: <20020613145453.A17762@dalsemi.com> Message-ID: <20020613150143.B17762@dalsemi.com>

On Thu, Jun 13, 2002 at 02:54:53PM -0700, Colin Kuskie wrote: Crap! Here's the script:

#!/usr/local/bin/perl -w
use strict;
use lib '~/perl/modules';
use Parse::RecDescent;
use Data::Dumper;

my $grammar = <<'EOGRAMMAR';
<autotree>
#rules
netlist : elements(s)
  { bless \%item, $item{__RULE__} }
elements : comment_line | subckt | subckt_def # | comment
  { bless \%item, $item{__RULE__} }
comment_line : '*' comment newline
  { bless \%item, $item{__RULE__} }
subckt: 'X' name node(s) word(s?) newline
  { bless \%item, $item{__RULE__} }
subckt_def: '.SUBCKT' name node(s) newline subckt(s) '.ENDS' word(s) newline
  { bless \%item, $item{__RULE__} }
#terminals
comment : /.*/
  { bless {__VALUE__ => $item[1] }, $item{__RULE__} }
name : /\w+/
  { bless {__VALUE__ => $item[1] }, $item{__RULE__} }
word: /\S+/
  { bless {__VALUE__ => $item[1] }, $item{__RULE__} }
node : /\w+(?!=)/
  { bless {__VALUE__ => $item[2] }, $item{__RULE__} }
newline : /\n/
  { bless {__VALUE__ => $item[1] }, $item{__RULE__} }
EOGRAMMAR

my ($grammar1,$grammar2) = ($grammar,$grammar);
$grammar1 =~ s/<autotree>//;
$grammar2 =~ s/{\s*bless.+$//mg;
my $p1 = Parse::RecDescent->new($grammar1) or die "bad grammar";
my $p2 = Parse::RecDescent->new($grammar2) or die "bad grammar";
undef $/;
my $text = <DATA>;
my $t1 = $p1->netlist($text) or die "bad netlist";
my $t2 = $p2->netlist($text) or die "bad netlist";
my ($tree1,$tree2);
$tree1 = $t1->descend($tree1);
$tree2 = $t2->descend($tree2);
#print Dumper($t1);
print join "\n", "#Actions", $grammar1, ('-'x60), '#<autotree>', $grammar2, ('-'x60);
print join "\n", "#Actions", $tree1, ('-'x60), '#<autotree>', $tree2, ('-'x60);
sub netlist::descend { my ($self,$tree) = @_; $tree .= "Netlist\n"; my $num = scalar @{ $self->{elements} }; $tree .= "Found $num elements\n"; $tree = join ':', $tree, keys %{$self}, "\n"; foreach (@{ $self->{elements} }) { $tree = join '', $tree, ref $_, "\n"; #$tree = join '', $tree, $_->descend($tree); } return $tree; } sub elements::descend { my ($self,$tree) = @_; $tree = join ':', $tree, keys %{$self}, "\n"; return $tree; } __DATA__ * # FILE NAME: /DESIGN/NWDC/91D35_PDX/SIM/HSPICE/CKUSKIE/AMP5T/HSPICES/ * schematic/netlist/amp5T.c.raw * Netlist output for hspiceS. * Generated on May 6 16:30:58 2002 * File name: colinLib_amp5T_schematic.s. * Subcircuit for cell: amp5T. * Generated for: hspiceS. * Generated on May 6 16:30:58 2002. XP2 AVDD LOAD GATE AVDD PCH4_D35W_1 M=104.0 XP1 AVDD GATE OUTN AVDD PCH4_D35W_2 M=1.0 XP3 AVDD OUTN OUTN AVDD PCH4_D35W_2 M=1.0 XN3 AGND IBAS IBAS AGND NCH4_D35W_3 M=1.0 XN2 AGND CS IBAS AGND NCH4_D35W_3 M=1.0 XN1 AGND GATE VREF CS NCH4_D35W_4 M=1.0 XN0 AGND OUTN INN CS NCH4_D35W_4 M=1.0 * File name: d35w_pch4_d35w_schematic.s. * Subcircuit for cell: pch4_d35w. * Generated for: hspiceS. * Generated on May 6 16:30:58 2002. * terminal mapping: B = B * D = D * G = G * S = S * End of subcircuit definition. * File name: d35w_nch4_d35w_schematic.s. * Subcircuit for cell: nch4_d35w. * Generated for: hspiceS. * Generated on May 6 16:30:58 2002. * terminal mapping: B = B * D = D * G = G * S = S * End of subcircuit definition. * File name: d35w_nch4_d35w_schematic.s. * Subcircuit for cell: nch4_d35w. * Generated for: hspiceS. * Generated on May 6 16:30:58 2002. * terminal mapping: B = B * D = D * G = G * S = S * End of subcircuit definition. * File name: d35w_pch4_d35w_schematic.s. * Subcircuit for cell: pch4_d35w. * Generated for: hspiceS. * Generated on May 6 16:30:58 2002. * terminal mapping: B = B * D = D * G = G * S = S * End of subcircuit definition. 
* Include files * End of Netlist .SUBCKT PCH4_D35W_2 B D G S XM0 D G S B MPGSXHD4I_PCH5V W=(102.0) L=(1.0) AD=+9.69000000E+01 AS=+9.69000000E+01 PD=+2.05900000E+02 PS=+2.05900000E+02 .ENDS PCH4_D35W_2 .SUBCKT NCH4_D35W_4 B D G S XM0 D G S B MNGSXHD4I_NCH5V W=(50.0) L=(1.0) AD=+4.75000000E+01 AS=+4.75000000E+01 PD=+1.01900000E+02 PS=+1.01900000E+02 .ENDS NCH4_D35W_4 .SUBCKT NCH4_D35W_3 B D G S XM0 D G S B MNGSXHD4I_NCH5V W=(20.0) L=(4.0) AD=+1.90000000E+01 AS=+1.90000000E+01 PD=+4.19000000E+01 PS=+4.19000000E+01 .ENDS NCH4_D35W_3 .SUBCKT PCH4_D35W_1 B D G S XM0 D G S B MPGSXHD4I_PCH5V W=(200.0) L=(1.0) AD=+1.90000000E+02 AS=+1.90000000E+02 PD=+4.01900000E+02 PS=+4.01900000E+02 .ENDS PCH4_D35W_1 TIMTOWTDI From merlyn at stonehenge.com Thu Jun 13 17:14:02 2002 From: merlyn at stonehenge.com (Randal L. Schwartz) Date: Wed Aug 4 00:05:35 2004 Subject: brain hurt and file test In-Reply-To: References: Message-ID: <86lm9ig72t.fsf@blue.stonehenge.com> >>>>> "Tom" == Tom Phoenix writes: Tom> Just in case there's any confusion between the code and the comment: It's Tom> -M to get the "modification age" and -A to get the "accessed age". Randal Tom> knows this stuff so well that sometimes he doesn't read the comments. :-) Tom, how many times have I said "not in front of the children"? :-) -- Randal L. Schwartz - Stonehenge Consulting Services, Inc. - +1 503 777 0095 Perl/Unix/security consulting, Technical writing, Comedy, etc. etc. See PerlTraining.Stonehenge.com for onsite and open-enrollment Perl training! TIMTOWTDI From lemming at attbi.com Fri Jun 14 00:09:03 2002 From: lemming at attbi.com (Mark Morgan) Date: Wed Aug 4 00:05:35 2004 Subject: brain hurt and file test References: Message-ID: <3D097A6F.5040203@attbi.com> Tom Phoenix wrote: > > Just in case there's any confusion between the code and the comment: It's > -M to get the "modification age" and -A to get the "accessed age". Randal > knows this stuff so well that sometimes he doesn't read the comments. 
:-) > > As a rule of thumb, when thinking about -M, -A, and -C, you should nearly > always use -M, which tells you "how old is the data in this file", or "how > long has it been since this file was updated". The mtime (which is the > underlying timestamp value used by Perl to calculate the -M value) is the > timestamp used by 'ls -l' and make, and it's what most people think of as > the "age" of the file. When in doubt, go with -M. > > The -C filetest is almost never what you want. For those who really need > to know, it's telling you "how long has it been since anything important > (like its name, permissions, contents) changed about this file". The ctime > is used for incremental backups; if the ctime is older than your previous > incremental backup, you can skip this one. If you're not doing incremental > backups, you can probably skip this one, too. There is a case where the -C would be a better choice. If a file is being copied with attributes preserved, once the copy is done the -M time will show whatever the original file started as. This shouldn't be a problem in your case unless a file gets stamped with a future time. -Mark TIMTOWTDI From rb-pdx-pm at redcat.com Sun Jun 16 11:39:53 2002 From: rb-pdx-pm at redcat.com (Tom Phoenix) Date: Wed Aug 4 00:05:35 2004 Subject: Parse::RecDescent and autotree In-Reply-To: <20020613150143.B17762@dalsemi.com> Message-ID: On Thu, 13 Jun 2002, Colin Kuskie wrote: > use lib '~/perl/modules'; Does that do anything useful? The tilde is a shortcut used by some shells to refer to home directories, but AFAIK this merely makes your @INC have a path containing a tilde. (So, it could be useful if you have a directory named tilde in your current working dir.... :-) You could use your HOME environment variable, if you must, but it's probably better to explicitly use the directory's correct name. > use Data::Dumper; You said in your earlier message that Data::Dumper complained about something. 
From the looks of things, DD should be giving you a better error message than that; maybe you should find out why it's complaining and improve the diagnostic. (Of course, a newer version of the module may already have fixed this, so check that first.) My guess: It got confused by a regex from qr//. (Those didn't exist when DD was written, I'm pretty sure.) I'm sorry that I don't have more help to offer here, but I couldn't let your question go completely unanswered. :-) Good luck with it! --Tom Phoenix TIMTOWTDI From ckuskie at dalsemi.com Mon Jun 17 11:06:39 2002 From: ckuskie at dalsemi.com (Colin Kuskie) Date: Wed Aug 4 00:05:35 2004 Subject: Parse::RecDescent and autotree In-Reply-To: ; from rb-pdx-pm@redcat.com on Sun, Jun 16, 2002 at 09:39:53AM -0700 References: <20020613150143.B17762@dalsemi.com> Message-ID: <20020617090639.A21077@dalsemi.com> On Sun, Jun 16, 2002 at 09:39:53AM -0700, Tom Phoenix wrote: > On Thu, 13 Jun 2002, Colin Kuskie wrote: > > > use lib '~/perl/modules'; > > Does that do anything useful? Actually, since P::RD isn't installed anywhere else aside from my private modules directory, it does something very useful, and it works :) > You could use your HOME > environment variable, if you must, but it's probably better to explicitly > use the directory's correct name. Ideally, when I release the code to the rest of the group, P::RD should be added to our perl install, so I won't need it. > > use Data::Dumper; > > You said in your earlier message that Data::Dumper complained about > something. From the looks of things, DD should be giving you a better > error message than that; maybe you should find out why it's complaining > and improve the diagnostic. (Of course, a newer version of the module may > already have fixed this, so check that first.) My guess: It got confused > by a regex from qr//. (Those didn't exist when DD was written, I'm pretty > sure.) Exactly right. 
Once I understood that, I quickly moved the script over to a machine with a more modern perl/DD (our Solaris workstations still run Solaris 2.6, and perl 5.005_03) then DD worked great. However, it still doesn't explain why P::RD doesn't work as advertised. Maybe I should send this over to Dr. Conway and see what shakes. Colin TIMTOWTDI From rb-pdx-pm at redcat.com Mon Jun 17 14:24:38 2002 From: rb-pdx-pm at redcat.com (Tom Phoenix) Date: Wed Aug 4 00:05:35 2004 Subject: Parse::RecDescent and autotree In-Reply-To: <20020617090639.A21077@dalsemi.com> Message-ID: On Mon, 17 Jun 2002, Colin Kuskie wrote: > > > use lib '~/perl/modules'; > > > > Does that do anything useful? > > Actually, since P::RD isn't installed anywhere else aside from my > private modules directory, it does something very useful, and it works > :) Huh? At least one of us is confused. Let's see whether it's me. :-) My contention is that that line does not make Perl look in a subdirectory of your home directory for modules, although it pretends to do that. Instead, it looks in your _current_ directory for a directory with the unlikely name of '~'. If that's not found (and I suspect it's not) it goes on to search the rest of your @INC. (So, if perl is finding your modules, my contention is that it would do so even without this line. Maybe your module dir is listed in your default @INC, or in the PERL5LIB environment variable, for example. Of course, you could comment-out the 'use lib' line to find out whether it's really needed.) If using a tilde like this in 'use lib' is supposed to work, it's not documented anywhere that I can find. How could we test this? Hmmm... Well, if ~ is your home dir, and if it's not nested more than three directories below the root dir, then ~/../../../ should be synonymous with the root dir. (Try 'ls ~/../../../' to see what I mean.) 
So, here's some code:

#!/usr/bin/perl -w
my $root_dir;
BEGIN {
  $root_dir = '~/../../../';
  # Here's the fixer:
  # $root_dir =~ s#~#$ENV{HOME}#;
  for (@INC) { s#^/#$root_dir#; }
}
use lib '~/perl/modules';
print "Yep, that worked: $root_dir is a synonym of the root dir.\n";

On my system, it fails as expected. If I uncomment the "fixer", it works. Do you see something different on yours? (You may need to change the "fixer" if you don't have a $HOME environment variable. And if your homedir is in /mnt/remote/offsite/somewhere/else/users/yeah/right, you'll need to add a few more dot-dots to $root_dir in the first place.)

(Aside for the adventurous: Consider the command 'ln -s ~ ./~'. That is another way to make this program work, but don't do it unless you know what it does and how to undo it.)

> However, it still doesn't explain why P::RD doesn't work as advertised.
> Maybe I should send this over to Dr. Conway and see what shakes.

Good idea. It's probably all his fault. :-) --Tom Phoenix

TIMTOWTDI From ckuskie at dalsemi.com Mon Jun 17 16:05:14 2002 From: ckuskie at dalsemi.com (Colin Kuskie) Date: Wed Aug 4 00:05:35 2004 Subject: Parse::RecDescent and autotree In-Reply-To: ; from rb-pdx-pm@redcat.com on Mon, Jun 17, 2002 at 12:24:38PM -0700 References: <20020617090639.A21077@dalsemi.com> Message-ID: <20020617140514.B21077@dalsemi.com>

On Mon, Jun 17, 2002 at 12:24:38PM -0700, Tom Phoenix wrote:
> On Mon, 17 Jun 2002, Colin Kuskie wrote:
>
> > > > use lib '~/perl/modules';
> > >
> > > Does that do anything useful?
> >
> > Actually, since P::RD isn't installed anywhere else aside from my
> > private modules directory, it does something very useful, and it works
> > :)
>
> Huh? At least one of us is confused. Let's see whether it's me. :-)

Nope, it's me.
After removing the use lib, I got to wondering why it worked, since P::RD isn't installed anywhere else, and after some: perl -V env | less grep PERL5LIB .cshrc* .alias* I found this: setenv PERL5LIB /users/ckuskie/perl/modules and it became clear. Thanks, Tom! Colin TIMTOWTDI From karic at lclark.edu Thu Jun 20 17:33:19 2002 From: karic at lclark.edu (Kari Chisholm) Date: Wed Aug 4 00:05:35 2004 Subject: auto-backup across the net Message-ID: <3D12582F.53889ED9@lclark.edu> Friends-- Here's the situation. I'm building a mission-critical web-based database. It's hosted on a server at, say, http://www.serverone.foo/bigdata.cgi. If that server goes down, even for an hour, I want to be able to immediately tell my clients to switch to another server (at another location) at, say, http://www.servertwo.foo/bigdata.cgi. In order to accomplish this kind of seamless transition, I need to ensure that data is backed up across the net in more-or-less real time - perhaps, every 5 minutes we ship changes to the data across the net. Is there an obvious solution or technology that I should be thinking of? I'm almost exclusively a Perl guy, so solutions with Perl are a good thing, but I'm open to other thoughts... Thanks! -kari. TIMTOWTDI From merlyn at stonehenge.com Thu Jun 20 17:47:58 2002 From: merlyn at stonehenge.com (Randal L. Schwartz) Date: Wed Aug 4 00:05:35 2004 Subject: auto-backup across the net In-Reply-To: <3D12582F.53889ED9@lclark.edu> References: <3D12582F.53889ED9@lclark.edu> Message-ID: <861yb18t41.fsf@blue.stonehenge.com> >>>>> "Kari" == Kari Chisholm writes: Kari> In order to accomplish this kind of seamless transition, I need to Kari> ensure that data is backed up across the net in more-or-less real time Kari> - perhaps, every 5 minutes we ship changes to the data across the net. Kari> Is there an obvious solution or technology that I should be thinking Kari> of? 
I'm almost exclusively a Perl guy, so solutions with Perl are a Kari> good thing, but I'm open to other thoughts... There are near-real-time replication solutions for both MySQL and PostgreSQL. Of course, with Oracle, it's that way out of the box. :) -- Randal L. Schwartz - Stonehenge Consulting Services, Inc. - +1 503 777 0095 Perl/Unix/security consulting, Technical writing, Comedy, etc. etc. See PerlTraining.Stonehenge.com for onsite and open-enrollment Perl training! TIMTOWTDI From cdawson at webiphany.com Thu Jun 20 19:42:07 2002 From: cdawson at webiphany.com (Chris Dawson) Date: Wed Aug 4 00:05:35 2004 Subject: auto-backup across the net References: <3D12582F.53889ED9@lclark.edu> Message-ID: <3D12765E.6080603@webiphany.com> I've not looked heavily into it, but I know MySQL supports replication, which might be what you need. This seems like a DB issue rather than a programming issue. So, I would look into documentation for MySQL or Oracle or whatever database it is you are using before trying to write something like this in Perl. http://www.mysql.com/doc/R/e/Replication_Intro.html You are using a database, right? If you are creating your own database using perl scripts, well, then I guess it is a perl issue. :) Chris Kari Chisholm wrote: > Friends-- > > Here's the situation. I'm building a mission-critical web-based > database. It's hosted on a server at, say, > http://www.serverone.foo/bigdata.cgi. > > If that server goes down, even for an hour, I want to be able to > immediately tell my clients to switch to another server (at another > location) at, say, http://www.servertwo.foo/bigdata.cgi. > > In order to accomplish this kind of seamless transition, I need to > ensure that data is backed up across the net in more-or-less real time > - perhaps, every 5 minutes we ship changes to the data across the net. > > Is there an obvious solution or technology that I should be thinking > of? 
I'm almost exclusively a Perl guy, so solutions with Perl are a > good thing, but I'm open to other thoughts... > > Thanks! > > -kari. > TIMTOWTDI -- Chris Dawson http://www.webiphany.com/ Send email to [ x at webiphany dot com ] TIMTOWTDI From karic at lclark.edu Fri Jun 21 11:58:19 2002 From: karic at lclark.edu (Kari Chisholm) Date: Wed Aug 4 00:05:35 2004 Subject: auto-backup across the net References: <3D12582F.53889ED9@lclark.edu> <3D12765E.6080603@webiphany.com> Message-ID: <3D135B2B.59DB3695@lclark.edu> Chris & Randal-- That's right: I'm building it in Perl myself. The database size is actually quite small, but it's still mission-critical. Basically, it's a large stack of tiny little flatfiles. If I could write a script (called by cron) to look for all files that have changed, and then deliver them to someplace across the net - well, that'd be perfect. -kari. Chris Dawson wrote: > > I've not looked heavily into it, but I know MySQL supports replication, > which might be what you need. This seems like a DB issue rather than a > programming issue. So, I would look into documentation for MySQL or > Oracle or whatever database it is you are using before trying to write > something like this in Perl. > > http://www.mysql.com/doc/R/e/Replication_Intro.html > > You are using a database, right? If you are creating your own database > using perl scripts, well, then I guess it is a perl issue. :) > > Chris > > Kari Chisholm wrote: > > Friends-- > > > > Here's the situation. I'm building a mission-critical web-based > > database. It's hosted on a server at, say, > > http://www.serverone.foo/bigdata.cgi. > > > > If that server goes down, even for an hour, I want to be able to > > immediately tell my clients to switch to another server (at another > > location) at, say, http://www.servertwo.foo/bigdata.cgi. 
> > > > In order to accomplish this kind of seamless transition, I need to > > ensure that data is backed up across the net in more-or-less real time > > - perhaps, every 5 minutes we ship changes to the data across the net. > > > > Is there an obvious solution or technology that I should be thinking > > of? I'm almost exclusively a Perl guy, so solutions with Perl are a > > good thing, but I'm open to other thoughts... > > > > Thanks! > > > > -kari. > > TIMTOWTDI > > -- > Chris Dawson > http://www.webiphany.com/ > Send email to [ x at webiphany dot com ] TIMTOWTDI From todd_caine at eli.net Fri Jun 21 12:08:22 2002 From: todd_caine at eli.net (Todd Caine) Date: Wed Aug 4 00:05:35 2004 Subject: auto-backup across the net References: <3D12582F.53889ED9@lclark.edu> <3D12765E.6080603@webiphany.com> <3D135B2B.59DB3695@lclark.edu> Message-ID: <3D135D86.AFFF852C@eli.net> You could probably accomplish that with rsync mirroring. Main rsync site http://samba.anu.edu.au/rsync/ Rsync mirroring howto and FAQ http://sunsite.dk/info/guides/rsync/rsync-mirroring.html Todd Kari Chisholm wrote: > > Chris & Randal-- > > That's right: I'm building it in Perl myself. The database size is > actually quite small, but it's still mission-critical. > > Basically, it's a large stack of tiny little flatfiles. If I could > write a script (called by cron) to look for all files that have > changed, and then deliver them to someplace across the net - well, > that'd be perfect. > > -kari. 
> TIMTOWTDI From ckuskie at dalsemi.com Fri Jun 21 12:30:21 2002 From: ckuskie at dalsemi.com (Colin Kuskie) Date: Wed Aug 4 00:05:35 2004 Subject: auto-backup across the net In-Reply-To: <3D135B2B.59DB3695@lclark.edu>; from karic@lclark.edu on Fri, Jun 21, 2002 at 09:58:19AM -0700 References: <3D12582F.53889ED9@lclark.edu> <3D12765E.6080603@webiphany.com> <3D135B2B.59DB3695@lclark.edu> Message-ID: <20020621103021.A29965@dalsemi.com> On Fri, Jun 21, 2002 at 09:58:19AM -0700, Kari Chisholm wrote: > > That's right: I'm building it in Perl myself. The database size is > actually quite small, but it's still mission-critical. > > Basically, it's a large stack of tiny little flatfiles. If I could > write a script (called by cron) to look for all files that have > changed, and then deliver them to someplace across the net - well, > that'd be perfect. I love Perl, but what you're trying to do has already been done with another tool called rsync. http://rsync.samba.org/ It does exactly what you want. It opens a connection between source and destination, checks to see which files have changed and by default only sends the differences. It's been optimized to be very fast and efficient. In fact, its only problem is that last time I checked you couldn't embed it in perl :) When I worked at Cadence Software, we used it to mirror multi-Gb design databases nightly between England, Scotland, and both coasts of the U.S. Colin TIMTOWTDI From cfrjlr at yahoo.com Fri Jun 21 16:27:07 2002 From: cfrjlr at yahoo.com (charles radley) Date: Wed Aug 4 00:05:35 2004 Subject: Next PDX.pm meeting ? Message-ID: <20020621212707.12075.qmail@web13408.mail.yahoo.com> Greetings from a new member, I was wondering when is the next pdx.pm meeting ? Looks like I just missed one. Best regards, Charles F. Radley tel (503)-579-4686 __________________________________________________ Do You Yahoo!? Yahoo!

- Official partner of 2002 FIFA World Cup http://fifaworldcup.yahoo.com TIMTOWTDI From merlyn at stonehenge.com Fri Jun 21 16:52:58 2002 From: merlyn at stonehenge.com (Randal L. Schwartz) Date: Wed Aug 4 00:05:35 2004 Subject: Next PDX.pm meeting ? In-Reply-To: <20020621212707.12075.qmail@web13408.mail.yahoo.com> References: <20020621212707.12075.qmail@web13408.mail.yahoo.com> Message-ID: <864rfw47ut.fsf@blue.stonehenge.com> >>>>> "charles" == charles radley writes: charles> Greetings from a new member, charles> I was wondering when is the next pdx.pm meeting ? My calendar says 10 july, but it's quite possible I'm off my rocker again. :) -- Randal L. Schwartz - Stonehenge Consulting Services, Inc. - +1 503 777 0095 Perl/Unix/security consulting, Technical writing, Comedy, etc. etc. See PerlTraining.Stonehenge.com for onsite and open-enrollment Perl training! TIMTOWTDI From poec at yahoo.com Fri Jun 21 17:54:07 2002 From: poec at yahoo.com (Ovid) Date: Wed Aug 4 00:05:35 2004 Subject: Next PDX.pm meeting ? In-Reply-To: <864rfw47ut.fsf@blue.stonehenge.com> Message-ID: <20020621225407.4345.qmail@web9102.mail.yahoo.com> --- "Randal L. Schwartz" wrote: > >>>>> "charles" == charles radley writes: > > charles> Greetings from a new member, > charles> I was wondering when is the next pdx.pm meeting ? > > My calendar says 10 july, but it's quite possible I'm off > my rocker again. :) Yes, Randal, you're off your rocker, but you have the date right ;) For the next meeting, chromatic will be giving a presentation on testing. You *don't* want to miss this one. Great stuff. Christian Brink is getting around to updating the Web site (http://portland.pm.org) with the information. 
Of course, since Emacs isn't installed on their server, Christian's probably going to be using Pico, so he'll have some trouble :) Cheers, Curtis ===== "Ovid" on http://www.perlmonks.org/ Someone asked me how to count to 10 in Perl: push@A,$_ for reverse q.e...q.n.;for(@A){$_=unpack(q|c|,$_);@a=split//; shift@a;shift@a if $a[$[]eq$[;$_=join q||,@a};print $_,$/for reverse @A __________________________________________________ Do You Yahoo!? Yahoo! - Official partner of 2002 FIFA World Cup http://fifaworldcup.yahoo.com TIMTOWTDI From merlyn at stonehenge.com Fri Jun 21 18:00:18 2002 From: merlyn at stonehenge.com (Randal L. Schwartz) Date: Wed Aug 4 00:05:35 2004 Subject: Next PDX.pm meeting ? In-Reply-To: <20020621225407.4345.qmail@web9102.mail.yahoo.com> References: <20020621225407.4345.qmail@web9102.mail.yahoo.com> Message-ID: <86k7os2q65.fsf@blue.stonehenge.com> >>>>> "Ovid" == Ovid writes: Ovid> Christian Brink is getting around to updating the Web site (http://portland.pm.org) with the Ovid> information. Ovid> Of course, since Emacs isn't installed on their server, Christian's probably going to be using Ovid> Pico, so he'll have some trouble :) Emacs has a remote editing mode that automatically FTP's or RCP's the files back and forth. Of course, Emacs has _____ [fill in the blank with nearly anything]. :) -- Randal L. Schwartz - Stonehenge Consulting Services, Inc. - +1 503 777 0095 Perl/Unix/security consulting, Technical writing, Comedy, etc. etc. See PerlTraining.Stonehenge.com for onsite and open-enrollment Perl training! TIMTOWTDI From elana at aracnet.com Fri Jun 21 18:27:19 2002 From: elana at aracnet.com (elana@aracnet.com) Date: Wed Aug 4 00:05:35 2004 Subject: Just another "how do I unsub from this list" squawk... :-/ Message-ID: <200206212327.g5LNRJ70019375@mail3.aracnet.com> As per subject line... apologies to all on the list. Tried on my own, could not find an answer... Thanks in advance. 
-Elana TIMTOWTDI From cp at onsitetech.com Fri Jun 21 18:10:15 2002 From: cp at onsitetech.com (Curtis Poe) Date: Wed Aug 4 00:05:35 2004 Subject: MCVE References: <20020621225407.4345.qmail@web9102.mail.yahoo.com> Message-ID: <00e901c21978$ce0b1d40$1a01a8c0@ot.onsitetech.com> Has anyone on the list ever used the MCVE (Mainstreet Credit Verification Engine) with Perl? We're trying to get this set up here at work and it's been giving me fits. The guy from mainstreet who's been helping me has been great, but he doesn't know Perl very well and their API documentation is rather poor. -- Cheers, Curtis Poe Senior Programmer ONSITE! Technology, Inc. www.onsitetech.com 503-233-1418 Taking e-Business and Internet Technology To The Extreme! TIMTOWTDI From joshua_keroes at eli.net Fri Jun 21 18:40:23 2002 From: joshua_keroes at eli.net (Joshua Keroes) Date: Wed Aug 4 00:05:35 2004 Subject: Just another "how do I unsub from this list" squawk... :-/ In-Reply-To: <200206212327.g5LNRJ70019375@mail3.aracnet.com> References: <200206212327.g5LNRJ70019375@mail3.aracnet.com> Message-ID: <20020621234023.GG4204@eli.net> On (Fri, Jun 21 23:27), elana@aracnet.com wrote: > As per subject line... apologies to all on the list. Tried on my own, could > not find an answer... Thanks in advance. Last thing I remember, I was Running for the door I had to find the passage back To the place I was before 'Relax,' said the night man, We are programmed to receive. You can checkout any time you like, but you can never leave! We're sorry, but your request has been denied. ONE OF US ONE OF US.
:-D Wuv, Woshua TIMTOWTDI From jasona at inetarena.com Fri Jun 21 19:17:29 2002 From: jasona at inetarena.com (Jason White) Date: Wed Aug 4 00:05:35 2004 Subject: auto-backup across the net References: <3D12582F.53889ED9@lclark.edu> Message-ID: <001a01c21982$3310ff30$5b86fc83@archer> If you were using an open-source database engine, I would recommend using a redundant device to store the actual data files of your database. This can be done with a variety of methods: there are third-party fail-over services, system-integrated clustering solutions, and network attached storage devices that will make your files redundant. If I were using a clustered solution, I would simply run a copy of the DBMS on each node. If you are using network attached storage, or the commonly suggested rsync to keep files current, then you will still need to run your DBMS on two systems and set up either a content-smart switch, or some DNS rules to do the failover. Since you are using perl flatfiles, rather than even a simple DBMS, you probably aren't going for an industry-standard solution, meaning transparency and seamlessness, so you could use rsync on two systems and have your cgi test for the presence of the primary server and connect to the secondary server in a failure event. Also, be sure to back up your files regularly, because I believe it can be possible to propagate corrupted data to the fail-over system. I would be happy to discuss some industry-standard solutions off-list. Jason White ----- Original Message ----- From: "Kari Chisholm" To: "Portland Perl Mongers" Sent: Thursday, June 20, 2002 3:33 PM Subject: auto-backup across the net > > Friends-- > > Here's the situation. I'm building a mission-critical web-based > database. It's hosted on a server at, say, > http://www.serverone.foo/bigdata.cgi.
> > If that server goes down, even for an hour, I want to be able to > immediately tell my clients to switch to another server (at another > location) at, say, http://www.servertwo.foo/bigdata.cgi. > > In order to accomplish this kind of seamless transition, I need to > ensure that data is backed up across the net in more-or-less real time > - perhaps, every 5 minutes we ship changes to the data across the net. > > Is there an obvious solution or technology that I should be thinking > of? I'm almost exclusively a Perl guy, so solutions with Perl are a > good thing, but I'm open to other thoughts... > > Thanks! > > -kari. > TIMTOWTDI > TIMTOWTDI From tex at off.org Fri Jun 21 20:15:47 2002 From: tex at off.org (Austin Schutz) Date: Wed Aug 4 00:05:35 2004 Subject: auto-backup across the net In-Reply-To: <001a01c21982$3310ff30$5b86fc83@archer>; from jasona@inetarena.com on Fri, Jun 21, 2002 at 05:17:29PM -0700 References: <3D12582F.53889ED9@lclark.edu> <001a01c21982$3310ff30$5b86fc83@archer> Message-ID: <20020621181547.K3880@gblx.net> > > Here's the situation. I'm building a mission-critical web-based > > database. It's hosted on a server at, say, > > http://www.serverone.foo/bigdata.cgi. > > > > If that server goes down, even for an hour, I want to be able to ^^^^^^^^^^^^^^^^^^^^ > > immediately tell my clients to switch to another server (at another ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ This part of the question has yet to be answered as far as I can tell. Of course I might have just spaced through the answer. How do you tell the clients? Presumably there's a simple way to send a redirect in a cgi, I just can't remember what it is. 
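[For the record, the redirect Austin is half-remembering is nothing exotic: a CGI response whose headers carry a Location line instead of the usual Content-type. CGI.pm's $q->redirect($url) emits the same thing. A minimal sketch, using Kari's hypothetical second server as the target:]

```perl
#!/usr/bin/perl -w
use strict;

# A CGI redirect: a Status and Location header, then the blank line
# that ends the header block. The browser follows it on its own.
my $failover = 'http://www.servertwo.foo/bigdata.cgi';   # Kari's made-up URL
my $response = "Status: 302 Found\r\nLocation: $failover\r\n\r\n";
print $response;
```

[Jeff's caveat below still applies: a cgi on the dead server obviously can't emit this, so the redirect has to come from whatever machine is still answering.]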
Austin TIMTOWTDI From jasona at inetarena.com Fri Jun 21 20:34:15 2002 From: jasona at inetarena.com (Jason White) Date: Wed Aug 4 00:05:35 2004 Subject: auto-backup across the net References: <3D12582F.53889ED9@lclark.edu> <001a01c21982$3310ff30$5b86fc83@archer> <20020621181547.K3880@gblx.net> Message-ID: <003d01c2198c$ec691260$5b86fc83@archer> If we are talking about an internet, web-based setup, where client systems are accessing your database, you have more immediate concerns on your hands than redundancy. If we are talking about a secure website, then the webserver should be the only system accessing the database, clients talk to the webserver and the webserver dishes out data. The webserver is the only one(s) who need to be updated. This can be done directly in your cgi (not the best solution) by testing for the presence of the required server/files before access, by a content-smart switch/router (hardware or software) which can re-route traffic to a different destination IP when/if the original fails, or by a round-robin DNS setup, where, if the default hostname fails, the DNS table switches to the second. If clustering is set up, then it is entirely the responsibility of the cluster and is transparent to the developer. If we are talking about an intranet application where many clients are accessing your database, try switching to a more server-based model which can more closely model the options above. ----- Original Message ----- From: "Austin Schutz" To: "Jason White" Cc: "Kari Chisholm" ; "Portland Perl Mongers" Sent: Friday, June 21, 2002 6:15 PM Subject: Re: auto-backup across the net > > > Here's the situation. I'm building a mission-critical web-based > > > database. It's hosted on a server at, say, > > > http://www.serverone.foo/bigdata.cgi.
> > > > > > If that server goes down, even for an hour, I want to be able to > ^^^^^^^^^^^^^^^^^^^^ > > > immediately tell my clients to switch to another server (at another > ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ > > > This part of the question has yet to be answered as far as I > can tell. Of course I might have just spaced through the answer. > How do you tell the clients? Presumably there's a simple way to > send a redirect in a cgi, I just can't remember what it is. > > Austin > TIMTOWTDI > TIMTOWTDI From tex at off.org Fri Jun 21 20:48:50 2002 From: tex at off.org (Austin Schutz) Date: Wed Aug 4 00:05:35 2004 Subject: auto-backup across the net In-Reply-To: <003d01c2198c$ec691260$5b86fc83@archer>; from jasona@inetarena.com on Fri, Jun 21, 2002 at 06:34:15PM -0700 References: <3D12582F.53889ED9@lclark.edu> <001a01c21982$3310ff30$5b86fc83@archer> <20020621181547.K3880@gblx.net> <003d01c2198c$ec691260$5b86fc83@archer> Message-ID: <20020621184850.L3880@gblx.net> On Fri, Jun 21, 2002 at 06:34:15PM -0700, Jason White wrote: > If we are talking about an internet, web-based setup, where client systems > are accessing your databse, you have more immediate concerns on your hands > than redundancy. > > If we are talking about a secure website, then the webserver should be the > only system accessing the database, clients talk to the webserver and the > webserver dishes out data. The webserver is the only one(s) who need to be > updated. This can be done directly in your cgi(not the best solution) by > testing for the presence of the required server/files before access, Ok, this was my point, but it still doesn't answer the question of how you tell the client. You test, find that something is broken, and then what? 
> by a > content-smart switch/router (hardware or software) which can re-route > traffic to a different destination IP when/if the original fails, or by a > round-robin DNS set up, where is the defualt hostname fails, the DNS table > switches to the second. If clustering is setup, then it is entirly the > responsability of the cluster and is transparent to the developer. > These are true, but may be overkill or overly expensive. Also, I'm not sure if the DNS solution works if a connection is made. If a connection is made but the webserver or cgi returns an error, will the client be smart enough to use the other machines indicated by DNS? > If we are talking about a an intranet application where many clients are > accessing your database, try switching to a more server based model which > can more closly model the options above. > Intranet applications I've programmed have known ahead of time about the location of redundant database servers. This works pretty well in my experience, and there's no reason it couldn't be done here. I still think the question of how you tell a client to go somewhere else is valid. What happens when e.g. the cgi can't access _any_ database server? Maybe the answer is just fail - but then again, maybe access would work from the same cgi on a different host depending on the particulars of the problem. Austin TIMTOWTDI From jeff at vpservices.com Fri Jun 21 20:47:24 2002 From: jeff at vpservices.com (Jeff Zucker) Date: Wed Aug 4 00:05:36 2004 Subject: auto-backup across the net References: <3D12582F.53889ED9@lclark.edu> <001a01c21982$3310ff30$5b86fc83@archer> <20020621181547.K3880@gblx.net> Message-ID: <3D13D72C.3020801@vpservices.com> Austin Schutz wrote: >>>Here's the situation. I'm building a mission-critical web-based >>>database. It's hosted on a server at, say, >>>http://www.serverone.foo/bigdata.cgi. 
>>> >>>If that server goes down, even for an hour, I want to be able to >>> > ^^^^^^^^^^^^^^^^^^^^ > >>>immediately tell my clients to switch to another server (at another >>> > ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ > > > This part of the question has yet to be answered as far as I > can tell. Of course I might have just spaced through the answer. > How do you tell the clients? Presumably there's a simple way to > send a redirect in a cgi If the server is down, it's a bit hard to redirect with a cgi on the server :-). -- Jeff TIMTOWTDI From cfrjlr at yahoo.com Fri Jun 21 20:53:56 2002 From: cfrjlr at yahoo.com (charles radley) Date: Wed Aug 4 00:05:36 2004 Subject: auto-backup across the net References: <3D12582F.53889ED9@lclark.edu> <001a01c21982$3310ff30$5b86fc83@archer> <20020621181547.K3880@gblx.net> <3D13D72C.3020801@vpservices.com> Message-ID: <3D13D8B3.F276AB0B@yahoo.com> Good point. High availability systems usually involve multiple processors. The main processor sends out a heartbeat signal to a separate monitoring processor. The monitoring processor checks the heartbeat arrives on time. If it times out then it can take whatever corrective action is programmed. These systems of course are not cheap. Jeff Zucker wrote: > Austin Schutz wrote: > > >>>Here's the situation. I'm building a mission-critical web-based > >>>database. It's hosted on a server at, say, > >>>http://www.serverone.foo/bigdata.cgi. > >>> > >>>If that server goes down, even for an hour, I want to be able to > >>> > > ^^^^^^^^^^^^^^^^^^^^ > > > >>>immediately tell my clients to switch to another server (at another > >>> > > ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ > > > > > > This part of the question has yet to be answered as far as I > > can tell. Of course I might have just spaced through the answer. > > How do you tell the clients? 
Presumably there's a simple way to > > send a redirect in a cgi > > If the server is down, it's a bit hard to redirect with a cgi on the > server :-). > > -- > Jeff > > TIMTOWTDI TIMTOWTDI From jasona at inetarena.com Fri Jun 21 21:14:22 2002 From: jasona at inetarena.com (Jason White) Date: Wed Aug 4 00:05:36 2004 Subject: auto-backup across the net References: <3D12582F.53889ED9@lclark.edu> <001a01c21982$3310ff30$5b86fc83@archer> <20020621181547.K3880@gblx.net> <003d01c2198c$ec691260$5b86fc83@archer> <20020621184850.L3880@gblx.net> Message-ID: <006401c21992$86826f90$5b86fc83@archer> Provided that the "database" is not stored directly on the webserver which is running the cgi, the connection state of the client/cgi is independent of the connection state of the cgi/database. A client will not lose its cgi session if the cgi is unable to access the database. A robust cgi can handle redirecting where it will make its database connection, there is never a need to inform the client. The subject of redundancy in webservers and maintaining client/server session data is a different issue. Jason White ----- Original Message ----- From: "Austin Schutz" To: "Jason White" Cc: "Kari Chisholm" ; "Portland Perl Mongers" Sent: Friday, June 21, 2002 6:48 PM Subject: Re: auto-backup across the net > On Fri, Jun 21, 2002 at 06:34:15PM -0700, Jason White wrote: > > If we are talking about an internet, web-based setup, where client systems > > are accessing your databse, you have more immediate concerns on your hands > > than redundancy. > > > > If we are talking about a secure website, then the webserver should be the > > only system accessing the database, clients talk to the webserver and the > > webserver dishes out data. The webserver is the only one(s) who need to be > > updated.
This can be done directly in your cgi(not the best solution) by > > testing for the presence of the required server/files before access, > > Ok, this was my point, but it still doesn't answer the question of > how you tell the client. You test, find that something is broken, and then > what? > > > by a > > content-smart switch/router (hardware or software) which can re-route > > traffic to a different destination IP when/if the original fails, or by a > > round-robin DNS set up, where is the defualt hostname fails, the DNS table > > switches to the second. If clustering is setup, then it is entirly the > > responsability of the cluster and is transparent to the developer. > > > > These are true, but may be overkill or overly expensive. Also, I'm not > sure if the DNS solution works if a connection is made. If a connection is > made but the webserver or cgi returns an error, will the client be smart > enough to use the other machines indicated by DNS? > > > If we are talking about a an intranet application where many clients are > > accessing your database, try switching to a more server based model which > > can more closly model the options above. > > > Intranet applications I've programmed have known ahead of time > about the location of redundant database servers. This works pretty well > in my experience, and there's no reason it couldn't be done here. I still > think the question of how you tell a client to go somewhere else is valid. > What happens when e.g. the cgi can't access _any_ database server? Maybe > the answer is just fail - but then again, maybe access would work from the > same cgi on a different host depending on the particulars of the problem. 
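[Jason's "robust cgi can handle redirecting where it will make its database connection" reduces to a small loop: walk an ordered list of backends and take the first one that answers a cheap probe. A sketch with invented hostnames; in practice $probe would be a test DBI->connect, a ping, or a stat of the flatfile directory:]

```perl
#!/usr/bin/perl -w
use strict;

# Return the first backend for which the probe callback succeeds,
# or undef when none is reachable (the case where you finally do
# have to tell the client something went wrong).
sub pick_backend {
    my ($probe, @backends) = @_;
    for my $b (@backends) {
        return $b if $probe->($b);
    }
    return undef;
}

# Toy probe: pretend only the secondary is up.
my %up = ('db2.example.foo' => 1);
my $live = pick_backend(sub { $up{$_[0]} },
                        'db1.example.foo', 'db2.example.foo');
print 'using: ', (defined $live ? $live : 'none'), "\n";
```

[The client never sees the switch: the cgi session stays on the same webserver, only the backend connection moves.]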
> > Austin > TIMTOWTDI From tex at off.org Fri Jun 21 21:44:44 2002 From: tex at off.org (Austin Schutz) Date: Wed Aug 4 00:05:36 2004 Subject: auto-backup across the net In-Reply-To: <006401c21992$86826f90$5b86fc83@archer>; from jasona@inetarena.com on Fri, Jun 21, 2002 at 07:14:22PM -0700 References: <3D12582F.53889ED9@lclark.edu> <001a01c21982$3310ff30$5b86fc83@archer> <20020621181547.K3880@gblx.net> <003d01c2198c$ec691260$5b86fc83@archer> <20020621184850.L3880@gblx.net> <006401c21992$86826f90$5b86fc83@archer> Message-ID: <20020621194444.M3880@gblx.net> > Provided that the "database" is not stored directly on the webserver which > is running the cgi, the connection state of the client/cgi is independant of > the connection state of the cgi/database. A client will not lose its cgi > session if the cgi is unable to access the database. A robust cgi can > handle redirecting where it will make its database connection, there is > never a need to inform the client. > No, this is not necessarily true. There may be times where the cgi is not able to make the database connection and would want to inform the client. Yes, there are probably ways to architect around this situation most of the time. It's beside the point though. The question is not "why would you do it?", the question is "how do you do it?". The question is not entirely trivial. The background process is not always a database, where (network connectivity allowing) you can simply point to a redundant server. An example: A webserver has an attached webcam, which is faulty. A backup webcam is hosted on a different server, administered by someone else. Is it possible to immediately redirect the person to the new site without user intervention? Austin TIMTOWTDI From cp at onsitetech.com Sat Jun 22 11:33:03 2002 From: cp at onsitetech.com (Curtis Poe) Date: Wed Aug 4 00:05:36 2004 Subject: How do I use email? 
Message-ID: <004001c21a0a$7b312a50$1a01a8c0@ot.onsitetech.com> ----- Original Message ----- From: "Curtis Poe" To: "Portland Perl Mongers" Sent: Saturday, June 22, 2002 9:02 AM > help Hmm... for those who may be curious about the odd little message above, that was supposed to be a "help" command sent to majordomo. Oops! Cheers, Curtis TIMTOWTDI From markymoon at attbi.com Sat Jun 22 17:06:23 2002 From: markymoon at attbi.com (MarkyMoon) Date: Wed Aug 4 00:05:36 2004 Subject: Next PDX.pm meeting ? References: <20020621225407.4345.qmail@web9102.mail.yahoo.com> <86k7os2q65.fsf@blue.stonehenge.com> Message-ID: <3D14F4DF.EE24BA67@attbi.com> "Randal L. Schwartz" wrote: > Of course, Emacs has _____ [fill in the blank with nearly anything]. :) Of course, Emacs has an article on this already. : ) -- MarkyMoon -- @a = ("a".."z"," ","-","\n");foreach $b ( 12,0,17,10,24,12,14,14,13,26,8,18,26,0,26, 22,0,13,13,0,27,1,4,26,15,4,17,11,26,7,0, 2,10,4,17) {print $a[$b]};print $a[28]; TIMTOWTDI From mikeraz at patch.com Mon Jun 24 11:24:24 2002 From: mikeraz at patch.com (mikeraz@patch.com) Date: Wed Aug 4 00:05:36 2004 Subject: copying files from Unix to Win system Message-ID: <20020624092424.A17035@patch.com> Hi All, I'm automating a process that receives text files via ftp and then mounts a SAMBA share to copy the files to a Windows based file server. The current method is open file, set $/ to "", slurp file, write file and close: open INFILE, "<$_" or die "cannot open local $_"; open OUTFILE, ">$tmnt/$dest/$_" or die "cannot open destination $_"; $fs = $/; undef $/; $cont = <INFILE>; print OUTFILE $cont; $/ = $fs; close INFILE; close OUTFILE; Problem with this approach is that the line endings are not converted.
Same problem for

    while(<INFILE>) {
        print OUTFILE;
    }

While I'm scratching my head over why this isn't being done automagically, I'm considering:

    @lines_in_file = <INFILE>;
    foreach (@lines_in_file) {
        chomp;
    }

    $/ = "\015\012";   # from newlines in perlport
    foreach (@lines_in_file) {
        print OUTFILE "$_$/";
    }

But that feels wrong, as in there-has-to-be-a-better-way wrong.

Now if someone is going to tell me that Perl has a built in file copy function . . . .

--
  Michael Rasmussen  aka  mikeraz
  Be appropriate && Follow your curiosity

  "They that give up essential liberty to obtain temporary safety,
   deserve neither liberty nor safety."
      -- Benjamin Franklin

  and the fortune cookie says:
  "The chain which can be yanked is not the eternal chain." -- G. Fitch
TIMTOWTDI

From karic at lclark.edu  Mon Jun 24 11:36:06 2002
From: karic at lclark.edu (Kari Chisholm)
Date: Wed Aug  4 00:05:36 2004
Subject: auto-backup across the net
References: <3D12582F.53889ED9@lclark.edu> <001a01c21982$3310ff30$5b86fc83@archer> <20020621181547.K3880@gblx.net>
Message-ID: <3D174A76.6E6B5E38@lclark.edu>

I've enjoyed the discussion on this part of my initial post. I think, however, I'd probably use the phone.

-kari.

Austin Schutz wrote:
> > > Here's the situation. I'm building a mission-critical web-based
> > > database. It's hosted on a server at, say,
> > > http://www.serverone.foo/bigdata.cgi.
> > >
> > > If that server goes down, even for an hour, I want to be able to
>       ^^^^^^^^^^^^^^^^^^^^
> > > immediately tell my clients to switch to another server (at another
>     ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
>
> 	This part of the question has yet to be answered as far as I
> can tell. Of course I might have just spaced through the answer.
> How do you tell the clients? Presumably there's a simple way to
> send a redirect in a cgi, I just can't remember what it is.
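The redirect being asked about here is just an HTTP Location header sent before any other output. A minimal sketch, assuming the backup URL from earlier in the thread; primary_is_down() is a hypothetical stand-in for a real health check:

```perl
#!/usr/bin/perl -w
# Minimal sketch of a CGI redirect: emit a Location header before any
# other output.  primary_is_down() is a hypothetical stand-in for a
# real health check of the primary server.
use strict;

sub redirect_header {
    my ($url) = @_;
    return "Status: 302 Found\r\nLocation: $url\r\n\r\n";
}

sub primary_is_down { return 1 }    # hypothetical health check

my $backup = 'http://www.servertwo.foo/bigdata.cgi';
print redirect_header($backup) if primary_is_down();
```

CGI.pm's redirect() function produces the same kind of header; the hand-rolled version above just makes the mechanism visible.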
>
> 	Austin
TIMTOWTDI

From karic at lclark.edu  Mon Jun 24 11:42:29 2002
From: karic at lclark.edu (Kari Chisholm)
Date: Wed Aug  4 00:05:36 2004
Subject: auto-backup across the net
References: <3D12582F.53889ED9@lclark.edu> <3D12765E.6080603@webiphany.com> <3D135B2B.59DB3695@lclark.edu>
Message-ID: <3D174BF5.6B997DC4@lclark.edu>

Wow. Thanks for all the discussion on this one.

I really am talking about a large stack of tiny little flatfiles (largest one would be maybe 100K). Is there a reason I couldn't just write a Perl script that, on a cron schedule, would cycle through all the files looking for ones that have been changed since X time and then FTP them to a second server?

-kari.

Kari Chisholm wrote:
>
> Chris & Randal--
>
> That's right: I'm building it in Perl myself. The database size is
> actually quite small, but it's still mission-critical.
>
> Basically, it's a large stack of tiny little flatfiles. If I could
> write a script (called by cron) to look for all files that have
> changed, and then deliver them to someplace across the net - well,
> that'd be perfect.
>
> -kari.
>
> Chris Dawson wrote:
> >
> > I've not looked heavily into it, but I know MySQL supports replication,
> > which might be what you need. This seems like a DB issue rather than a
> > programming issue. So, I would look into documentation for MySQL or
> > Oracle or whatever database it is you are using before trying to write
> > something like this in Perl.
> >
> > http://www.mysql.com/doc/R/e/Replication_Intro.html
> >
> > You are using a database, right? If you are creating your own database
> > using perl scripts, well, then I guess it is a perl issue. :)
> >
> > Chris
> >
> > Kari Chisholm wrote:
> > > Friends--
> > >
> > > Here's the situation. I'm building a mission-critical web-based
> > > database. It's hosted on a server at, say,
> > > http://www.serverone.foo/bigdata.cgi.
> > >
> > > If that server goes down, even for an hour, I want to be able to
> > > immediately tell my clients to switch to another server (at another
> > > location) at, say, http://www.servertwo.foo/bigdata.cgi.
> > >
> > > In order to accomplish this kind of seamless transition, I need to
> > > ensure that data is backed up across the net in more-or-less real time
> > > - perhaps, every 5 minutes we ship changes to the data across the net.
> > >
> > > Is there an obvious solution or technology that I should be thinking
> > > of? I'm almost exclusively a Perl guy, so solutions with Perl are a
> > > good thing, but I'm open to other thoughts...
> > >
> > > Thanks!
> > >
> > > -kari.
> > > TIMTOWTDI
> >
> > --
> > Chris Dawson
> > http://www.webiphany.com/
> > Send email to [ x at webiphany dot com ]
> TIMTOWTDI
TIMTOWTDI

From merlyn at stonehenge.com  Mon Jun 24 12:09:21 2002
From: merlyn at stonehenge.com (Randal L. Schwartz)
Date: Wed Aug  4 00:05:36 2004
Subject: auto-backup across the net
In-Reply-To: <3D174BF5.6B997DC4@lclark.edu>
References: <3D12582F.53889ED9@lclark.edu> <3D12765E.6080603@webiphany.com> <3D135B2B.59DB3695@lclark.edu> <3D174BF5.6B997DC4@lclark.edu>
Message-ID: <86ptygmwn2.fsf@blue.stonehenge.com>

>>>>> "Kari" == Kari Chisholm writes:

Kari> Wow. Thanks for all the discussion on this one.

Kari> I really am talking about a large stack of tiny little flatfiles
Kari> (largest one would be maybe 100K). Is there a reason I couldn't just
Kari> write a Perl script that, on a cron schedule, would cycle through
Kari> all the files looking for ones that have been changed since X time and
Kari> then FTP them to a second server?

rsync would still be the best for this. It can transfer them securely with ssh (you can't be serious about using FTP!), and automatically handles ownership and permission, and transfer using checksums, and even partial transfer restart.
And it also transfers the file into a temp file then renames when complete, so you don't get a partially written file visible to the other side.

rsync. Use rsync.

--
Randal L. Schwartz - Stonehenge Consulting Services, Inc. - +1 503 777 0095
Perl/Unix/security consulting, Technical writing, Comedy, etc. etc.
See PerlTraining.Stonehenge.com for onsite and open-enrollment Perl training!
TIMTOWTDI

From kellert at ohsu.edu  Mon Jun 24 12:15:44 2002
From: kellert at ohsu.edu (Tom Keller)
Date: Wed Aug  4 00:05:36 2004
Subject: copying files from Unix to Win system
In-Reply-To: <20020624092424.A17035@patch.com>
References: <20020624092424.A17035@patch.com>
Message-ID:

David Cross in his excellent book "Data Munging with Perl" suggests the following:

    #! /usr/local/bin/perl -w
    use strict;

    (@ARGV == 2) or die "Error: source and target formats not given.";
    my ($src, $tgt) = @ARGV;
    my %conv = ( CR   => "\cM",      ## Mac line ending
                 LF   => "\cJ",      ## Unix
                 CRLF => "\cM\cJ");  ## Windows
    $src = $conv{$src};
    $tgt = $conv{$tgt};
    $/ = $src;
    while (<>) {
        s/$src/$tgt/go;
        ## now do your data munging
    }

I do a lot of moving of text files between Unix, Mac and Windows machines, and I've found this snippet really useful.

Regards,
Tom K.

At 9:24 AM -0700 6/24/02, mikeraz@patch.com wrote:
>Hi All,
>
>I'm automating a process that receives text files via ftp and then mounts
>a SAMBA share to copy the files to a Windows based file server.
>
>The current method is open file, set $/ to "", slurp file, write
>file and close:
>
>    open INFILE, "<$_" or die "cannot open local $_";
>    open OUTFILE, ">$tmnt/$dest/$_" or die "cannot open destination $_";
>    $fs = $/;
>    undef $/;
>    $cont = <INFILE>;
>    print OUTFILE $cont;
>    $/ = $fs;
>    close INFILE;
>    close OUTFILE;
>
>Problem with this approach is that the line endings are not converted.
>Same problem for
>
>    while(<INFILE>) {
>        print OUTFILE;
>    }
>
>While I'm scratching my head over why this isn't being done automagically,
>I'm considering:
>
>    @lines_in_file = <INFILE>;
>    foreach (@lines_in_file) {
>        chomp;
>    }
>
>    $/ = "\015\012";   # from newlines in perlport
>    foreach (@lines_in_file) {
>        print OUTFILE "$_$/";
>    }
>
>But that feels wrong, as in there-has-to-be-a-better-way wrong.
>
>Now if someone is going to tell me that Perl has a built in file
>copy function . . . .
>
>--
>  Michael Rasmussen  aka  mikeraz
>  Be appropriate && Follow your curiosity
>
>  "They that give up essential liberty to obtain
>   temporary safety, deserve neither liberty nor safety."
>      -- Benjamin Franklin
>
>  and the fortune cookie says:
>  "The chain which can be yanked is not the eternal chain." -- G. Fitch
>TIMTOWTDI

--
Thomas J. Keller, Ph.D.
MMI Research Core Facility
Oregon Health & Science University
3181 SW Sam Jackson Park Rd
Portland, Oregon 97201
TIMTOWTDI

From karic at lclark.edu  Mon Jun 24 13:08:31 2002
From: karic at lclark.edu (Kari Chisholm)
Date: Wed Aug  4 00:05:36 2004
Subject: auto-backup across the net
References: <3D12582F.53889ED9@lclark.edu> <3D12765E.6080603@webiphany.com> <3D135B2B.59DB3695@lclark.edu> <3D174BF5.6B997DC4@lclark.edu> <86ptygmwn2.fsf@blue.stonehenge.com>
Message-ID: <3D17601F.A428D712@lclark.edu>

"Randal L. Schwartz" wrote:
> Kari> I really am talking about a large stack of tiny little flatfiles
> Kari> (largest one would be maybe 100K). Is there a reason I couldn't just
> Kari> write a Perl script that, on a cron schedule, would cycle through
> Kari> all the files looking for ones that have been changed since X time and
> Kari> then FTP them to a second server?
>
> rsync would still be the best for this. It can transfer them securely
> with ssh (you can't be serious about using FTP!), and automatically
> handles ownership and permission, and transfer using checksums, and
> even partial transfer restart.
> And it also transfers the file into a
> temp file then renames when complete, so you don't get a partially
> written file visible to the other side.
>
> rsync. use rsync.

OK. I knew there was a reason. Thanks!

-kari.
TIMTOWTDI

From kellert at ohsu.edu  Mon Jun 24 13:40:39 2002
From: kellert at ohsu.edu (Tom Keller)
Date: Wed Aug  4 00:05:36 2004
Subject: CPAN problem
Message-ID:

I've been having trouble with my CPAN module since I installed a maintenance patch on my IRIX 6.5 system. Here's an example (I su'd to root, and launched CPAN for perl 5.6.):

    cpan> m /dumper/
    CPAN: Storable loaded ok
    Going to read /.cpan/Metadata
    CPAN: LWP::UserAgent loaded ok
    Fetching with LWP:
      ftp://ftp.cs.colorado.edu/pub/perl/CPAN/authors/01mailrc.txt.gz
    Fetching with Net::FTP:
      ftp://ftp.cs.colorado.edu/pub/perl/CPAN/authors/01mailrc.txt.gz
    Can't call method "sockport" on an undefined value at
      /usr/local/lib/perl5/site_perl/5.6.0/Net/FTP.pm line 789.

Ignore if off-topic (or better yet redirect me to something appropriate).

Thanks,
Tom K.
--
Thomas J. Keller, Ph.D.
MMI Research Core Facility
Oregon Health & Science University
3181 SW Sam Jackson Park Rd
Portland, Oregon 97201
TIMTOWTDI

From cfrjlr at yahoo.com  Mon Jun 24 13:45:56 2002
From: cfrjlr at yahoo.com (charles radley)
Date: Wed Aug  4 00:05:36 2004
Subject: copying files from Unix to Win system
References: <20020624092424.A17035@patch.com>
Message-ID: <3D1768E4.3E92C17C@yahoo.com>

Mike,

Please clarify....

mikeraz@patch.com wrote:
>
> Problem with this approach is that the line endings are not converted. Same
> problem for
>

What do you mean? Converted to what? You are going from unix to windows, right? That should be pretty painless. Are the lines getting concatenated together?

When converting from Windows to unix you often have to chomp the last whitespace character from the line.

Need a bit more info to help you some more.

Cheers,
Charles R.
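The conversion at issue in this thread is Unix LF to Windows CRLF. A small sketch of the substitution, using the octal escapes from perlport; the sample data is made up:

```perl
#!/usr/bin/perl -w
# Sketch of the Unix -> Windows conversion under discussion: rewrite
# each bare LF (\012) as CRLF (\015\012).  The sample string is made up.
use strict;

sub unix_to_dos {
    my ($text) = @_;
    $text =~ s/\012/\015\012/g;    # LF -> CRLF, octal escapes per perlport
    return $text;
}

my $unix = "line one\012line two\012";
my $dos  = unix_to_dos($unix);
# $dos now ends each line with \015\012
```

Note this assumes the input really is LF-only; a file that already contains CRLF pairs would come out with doubled carriage returns.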
TIMTOWTDI

From cfrjlr at yahoo.com  Mon Jun 24 14:00:23 2002
From: cfrjlr at yahoo.com (charles radley)
Date: Wed Aug  4 00:05:36 2004
Subject: copying files from Unix to Win system
References: <20020624092424.A17035@patch.com>
Message-ID: <3D176C47.A6B3AFAE@yahoo.com>

mikeraz@patch.com wrote:
>
> Now if someone is going to tell me that Perl has a built in file copy function . . . .
>

Sometimes I use the standard perl Copy.pm module. From the ActiveState perl docs:

    The File::Copy module provides two basic functions, copy and move, which
    are useful for getting the contents of a file from one place to another.
    The copy function takes two parameters: a file to copy from and a file to
    copy to. Either argument may be a string, a FileHandle reference or a
    FileHandle glob. Obviously, if the first argument is a filehandle of some
    sort, it will be read from, and if it is a file name it will be opened for
    reading. Likewise, the second argument will be written to (and created if
    need be).

    use File::Copy;
    copy("file1","file2");

Cheers,
Charles F. Radley
TIMTOWTDI

From rb-pdx-pm at redcat.com  Mon Jun 24 15:53:44 2002
From: rb-pdx-pm at redcat.com (Tom Phoenix)
Date: Wed Aug  4 00:05:36 2004
Subject: CPAN problem
In-Reply-To:
Message-ID:

On Mon, 24 Jun 2002, Tom Keller wrote:

> Fetching with LWP:
> ftp://ftp.cs.colorado.edu/pub/perl/CPAN/authors/01mailrc.txt.gz
> Fetching with Net::FTP:
> ftp://ftp.cs.colorado.edu/pub/perl/CPAN/authors/01mailrc.txt.gz

Looks as if fetching with LWP didn't work. Maybe colorado.edu was down, or maybe there's something wrong with your LWP installation.

> Can't call method "sockport" on an undefined value at
> /usr/local/lib/perl5/site_perl/5.6.0/Net/FTP.pm line 789.

Looks as if fetching the same thing with Net::FTP didn't work either. Hmmm.... CPAN.pm should probably have handled that error more gracefully.

Can you get that file with an ordinary program, like a web browser, without difficulty?
If so, either it's a transient error (like: too many concurrent users of the colorado.edu FTP server), or something is wrong with Net::FTP, LWP, or something that they both depend upon.

If you can't get it, I'd say that taking colorado.edu out of your sites list would be a good step. And since you seem to be using 5.6.0, consider trying 5.6.1 - if it doesn't fix this bug for you, at least it should fix some others!

Good luck!

--Tom
TIMTOWTDI

From mikeraz at patch.com  Mon Jun 24 18:13:13 2002
From: mikeraz at patch.com (mikeraz@patch.com)
Date: Wed Aug  4 00:05:36 2004
Subject: copying files from Unix to Win system
In-Reply-To: <20020624092424.A17035@patch.com>; from mikeraz@patch.com on Mon, Jun 24, 2002 at 09:24:24AM -0700
References: <20020624092424.A17035@patch.com>
Message-ID: <20020624161313.C19999@patch.com>

On Mon, Jun 24, 2002 at 09:24:24AM -0700, mikeraz@patch.com typed:
> Hi All,
>
> I'm automating a process that receives text files via ftp and then mounts
> a SAMBA share to copy the files to a Windows based file server.
>
> [snip of how I first did it]
>
> While I'm scratching my head over why this isn't being done automagically,
> I'm considering:
>
>    @lines_in_file = <INFILE>;
>    foreach (@lines_in_file) {
>        chomp;
>    }
>
>    $/ = "\015\012";   # from newlines in perlport
>    foreach (@lines_in_file) {
>        print OUTFILE "$_$/";
>    }
>
> But that feels wrong, as in there-has-to-be-a-better-way wrong.
>

The key was to quit thinking of the file as a collection of lines. As Charles Radley suggested, a s/// operation is the key to easing my discomfort:

    [ open the respective files as IFILE and OFILE ]
    undef $/;
    $slurp = <IFILE>;
    $slurp =~ s/\012/\015\012/g;
    print OFILE $slurp;
    [ close all ]

Handles the translation fine at the expense of assuming that the file fits into memory and other such things . . . .

--
  Michael Rasmussen  aka  mikeraz
  Be appropriate && Follow your curiosity

  "They that give up essential liberty to obtain temporary safety,
   deserve neither liberty nor safety."
      -- Benjamin Franklin

  and the fortune cookie says:
  A scientific truth does not triumph by convincing its opponents and
  making them see the light, but rather because its opponents eventually
  die and a new generation grows up that is familiar with it.
      -- Max Planck
TIMTOWTDI

From jasona at inetarena.com  Fri Jun 28 17:16:56 2002
From: jasona at inetarena.com (Jason White)
Date: Wed Aug  4 00:05:36 2004
Subject: Dynamic HTML
Message-ID: <002301c21ef1$8558b4d0$5b86fc83@archer>

Looking for some opinions on what the 'best' dynamic HTML cgi solution is, using perl, of course. All of my books on the subject are a little old, so I thought I'd ask around.

Jason White
-------------- next part --------------
An HTML attachment was scrubbed...
URL: http://mail.pm.org/archives/pdx-pm-list/attachments/20020628/378c0f52/attachment.htm

From poec at yahoo.com  Fri Jun 28 17:38:56 2002
From: poec at yahoo.com (Ovid)
Date: Wed Aug  4 00:05:36 2004
Subject: Dynamic HTML
In-Reply-To: <002301c21ef1$8558b4d0$5b86fc83@archer>
Message-ID: <20020628223856.83587.qmail@web9103.mail.yahoo.com>

--- Jason White wrote:
> Looking for some opinions on what the 'best' dynamic HTML cgi solution is, using perl, of
> course. All of my books on the subject are a little old, so I thought I'd ask around.
>
> Jason White

Jason,

Can you define what you mean by "Dynamic HTML"? DHTML is a combination of JavaScript, CSS, and HTML, and different browsers react differently (older ones don't handle it at all). This is a client-side issue. However, I *think* what you mean is "how do I dynamically create a Web page?"

If so, I'm very opinionated (just ask my ex-wife :), but if you can explain a little more about what you would like to accomplish with this (now and for the future), I'd be better able to comment.
Cheers, Curtis "Ovid" Poe ===== "Ovid" on http://www.perlmonks.org/ Someone asked me how to count to 10 in Perl: push@A,$_ for reverse q.e...q.n.;for(@A){$_=unpack(q|c|,$_);@a=split//; shift@a;shift@a if $a[$[]eq$[;$_=join q||,@a};print $_,$/for reverse @A __________________________________________________ Do You Yahoo!? Yahoo! - Official partner of 2002 FIFA World Cup http://fifaworldcup.yahoo.com TIMTOWTDI From jasona at inetarena.com Fri Jun 28 18:08:05 2002 From: jasona at inetarena.com (Jason White) Date: Wed Aug 4 00:05:36 2004 Subject: Dynamic HTML References: <20020628223856.83587.qmail@web9103.mail.yahoo.com> Message-ID: <002e01c21ef8$a9675780$5b86fc83@archer> > Can you define what you mean by "Dynamic HTML"? DHTML is a combination of JavaScript, CSS, and > HTML, and different browsers react differently (older ones don't handle it at all). This is a > client-side issue. Hower, I *think* what you mean is "how do I dynamically create a Web page?" > If so, I'm very opinionated (just ask my ex-wife :), but if you can explain a little more about > what you would like to accomplish with this (now and for the future), I'd be better able to > comment. > > Cheers, > Curtis "Ovid" Poe What I meant is, if you were writing a back-end perl application with a dynamically generated HTML interface that maintains states, uses templates, processes information, accesses a database, etc... Which perl modules would you(Everybody / Anybody) personally find most useful. I am currently using: Bundle::DBI (DBD::mysql) HTTP::Template CGI The reason I am using these modules is because I have "CGI Programming" by good ol' O'reilly, however, I saw some dates in the book that made me cringe, so... before I invest too much effort into eating these modules, I figured I would see if anybody out here has any strong opinions about which ones to use. 
Jason White
TIMTOWTDI

From poec at yahoo.com  Fri Jun 28 19:04:06 2002
From: poec at yahoo.com (Ovid)
Date: Wed Aug  4 00:05:36 2004
Subject: Dynamic HTML
In-Reply-To: <002e01c21ef8$a9675780$5b86fc83@archer>
Message-ID: <20020629000406.38007.qmail@web9104.mail.yahoo.com>

--- Jason White wrote:
> What I meant is, if you were writing a back-end perl application with a
> dynamically generated HTML interface that maintains states, uses templates,
> processes information, accesses a database, etc... Which perl modules would
> you(Everybody / Anybody) personally find most useful.
>
> I am currently using:
> Bundle::DBI (DBD::mysql)
> HTTP::Template
> CGI

Hmm... I don't know HTTP::Template. Are you sure you don't mean HTML::Template? :)

I think you have good choices, but I prefer PostGres over MySQL, though I understand that MySQL has been making large strides towards becoming ACID compliant.

As for the template system, I like Template::Toolkit because of the extreme power and flexibility that it gives me, but HTML::Template's not a bad choice if you don't do anything too complicated. TT can take a while to learn, but once you do, it's golden (and not restricted to HTML).

For session management, check out Apache::Session. For some reason, it doesn't appear to actually require Apache, so I think the namespace is rotten, but I understand that it's a great module (though I haven't used it).

If you're doing straight CGI, many people seem fond of CGI::Application, so I'll mention that, but I haven't used it. In reading through the documentation, it does appear to encourage good coding practices, so you may wish to check it out.

Cheers,
Curtis "Ovid" Poe

=====
"Ovid" on http://www.perlmonks.org/
Someone asked me how to count to 10 in Perl:
push@A,$_ for reverse q.e...q.n.;for(@A){$_=unpack(q|c|,$_);@a=split//;
shift@a;shift@a if $a[$[]eq$[;$_=join q||,@a};print $_,$/for reverse @A

__________________________________________________
Do You Yahoo!?
Yahoo!
- Official partner of 2002 FIFA World Cup
http://fifaworldcup.yahoo.com
TIMTOWTDI

From sparksc at hlyw.com  Fri Jun 28 19:17:13 2002
From: sparksc at hlyw.com (Chris Sparks)
Date: Wed Aug  4 00:05:36 2004
Subject: Dynamic HTML
Message-ID:

Ovid,

Can you recommend book/web site/perldoc other than the module documentation for TT?

Sparky

>>> Ovid 06/28/02 05:04PM >>>
--- Jason White wrote:
> What I meant is, if you were writing a back-end perl application with a
> dynamically generated HTML interface that maintains states, uses templates,
> processes information, accesses a database, etc... Which perl modules would
> you(Everybody / Anybody) personally find most useful.
>
> I am currently using:
> Bundle::DBI (DBD::mysql)
> HTTP::Template
> CGI

Hmm... I don't know HTTP::Template. Are you sure you don't mean HTML::Template? :)

I think you have good choices, but I prefer PostGres over MySQL, though I understand that MySQL has been making large strides towards becoming ACID compliant.

As for the template system, I like Template::Toolkit because of the extreme power and flexibility that it gives me, but HTML::Template's not a bad choice if you don't do anything too complicated. TT can take a while to learn, but once you do, it's golden (and not restricted to HTML).

For session management, check out Apache::Session. For some reason, it doesn't appear to actually require Apache, so I think the namespace is rotten, but I understand that it's a great module (though I haven't used it).

If you're doing straight CGI, many people seem fond of CGI::Application, so I'll mention that, but I haven't used it. In reading through the documentation, it does appear to encourage good coding practices, so you may wish to check it out.
Cheers, Curtis "Ovid" Poe ===== "Ovid" on http://www.perlmonks.org/ Someone asked me how to count to 10 in Perl: push@A,$_ for reverse q.e...q.n.;for(@A){$_=unpack(q|c|,$_);@a=split//; shift@a;shift@a if $a[$[]eq$[;$_=join q||,@a};print $_,$/for reverse @A __________________________________________________ Do You Yahoo!? Yahoo! - Official partner of 2002 FIFA World Cup http://fifaworldcup.yahoo.com TIMTOWTDI TIMTOWTDI From poec at yahoo.com Fri Jun 28 23:12:02 2002 From: poec at yahoo.com (Ovid) Date: Wed Aug 4 00:05:36 2004 Subject: Dynamic HTML In-Reply-To: Message-ID: <20020629041202.78247.qmail@web9105.mail.yahoo.com> --- Chris Sparks wrote: > Ovid, Can you recommend book/web site/perldoc other than the module documentation for TT? > > Sparky Running Weblogs with Slash, by chromatic and friends, published by O'Reilly, has information about TT, but I haven't had a chance to review it. Of course, reading the docs at the TT Web site is useful (http://www.template-toolkit.org/docs/plain/index.html) and I strongly recommend their tutorials (http://www.template-toolkit.org/docs/plain/Tutorial/index.html) which will help you quickly learn the basics. Cheers, Curtis "Ovid" Poe ===== "Ovid" on http://www.perlmonks.org/ Someone asked me how to count to 10 in Perl: push@A,$_ for reverse q.e...q.n.;for(@A){$_=unpack(q|c|,$_);@a=split//; shift@a;shift@a if $a[$[]eq$[;$_=join q||,@a};print $_,$/for reverse @A __________________________________________________ Do You Yahoo!? Yahoo! 
- Official partner of 2002 FIFA World Cup
http://fifaworldcup.yahoo.com
TIMTOWTDI

From cp at onsitetech.com  Sat Jun 29 10:13:25 2002
From: cp at onsitetech.com (Curtis Poe)
Date: Wed Aug  4 00:05:36 2004
Subject: a book I want
References: <20020107152503.GB3673@shadowed.net>
Message-ID: <000801c21f7f$84e3e270$1a01a8c0@ot.onsitetech.com>

----- Original Message -----
From: "Allison Randal"
To:
Sent: Monday, January 07, 2002 8:25 AM
Subject: a book I want

> Since you are all thinking about Perl 6 this week, riddle me this. I'm
> sure O'Reilly or someone has thought of this (or something similar). I'm
> just curious who. I really want this book.

[snip]

> Has anyone heard of such a creature, or rumors of such a creature?

I understand that Damian Conway has something "planned", but I don't know the details yet. However, since the only guarantee about Perl 6 is that everything is in flux, I don't think there is any reasonable way such a thing could be constructed at this time. Can you imagine the wholesale revisions that would be necessary if, for some reason, they decided to change the concatenation operator again (what is it this time, by the way?). For that reason, I don't think one will be coming out any time within the next year.

Of course, since you mentioned that "the book should be released a few months before the full production version of Perl 6", then I suspect we'll wait at least two years for the book :(

--
Cheers,
Curtis Poe
Senior Programmer
ONSITE! Technology, Inc.
www.onsitetech.com
503-233-1418
Taking e-Business and Internet Technology To The Extreme!
TIMTOWTDI

From mark_swayne at mac.com  Sat Jun 29 14:50:15 2002
From: mark_swayne at mac.com (mark_swayne@mac.com)
Date: Wed Aug  4 00:05:36 2004
Subject: Dynamic HTML
In-Reply-To: <002e01c21ef8$a9675780$5b86fc83@archer>
Message-ID: <6E9F28B7-8B99-11D6-A8DF-000A2792192A@mac.com>

I am a big fan of HTML Mason.
Version 1.10 just came out and it makes using it as a CGI system as easy as using it with mod_perl, among other nice new features. See http://masonhq.com/ for more information.

Mason allows you to mix Perl into your HTML files to make them dynamic. Mason files, called components, are compiled into Perl scripts and then cached. When a request is processed, Mason only has to eval the cached Perl script.

Mason's templating model uses special components, known as autohandlers and dhandlers. Autohandlers are templates that are automatically applied to any component in the same branch of the file system. Dhandlers provide a way to handle requests for files that do not exist. A common use of dhandlers is to map HTML files stored in a database onto an apparent filesystem.

Mason was designed for use with mod_perl, and will perform best in a mod_perl environment. The latest version makes it easier to use in a CGI context, and I understand that it is also easier to use in a stand-alone script (I have not tried to do this).

The docs do a better job of explaining all this than I do.

Good luck,

-Mark

> What I meant is, if you were writing a back-end perl application with a
> dynamically generated HTML interface that maintains states, uses
> templates,
> processes information, accesses a database, etc... Which perl modules
> would
> you(Everybody / Anybody) personally find most useful.
>
> I am currently using:
> Bundle::DBI (DBD::mysql)
> HTTP::Template
> CGI
>
> The reason I am using these modules is because I have "CGI Programming"
> by
> good ol' O'reilly, however, I saw some dates in the book that made me
> cringe, so... before I invest too much effort into eating these
> modules, I
> figured I would see if anybody out here has any strong opinions about
> which
> ones to use.
>
> Jason White
>
> TIMTOWTDI
TIMTOWTDI
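To make the template idea in this thread concrete, here is a toy substitution in the spirit of HTML::Template, TT, and Mason: the HTML lives apart from the code, with placeholders filled in from Perl. The fill_in() routine and its [% name %] syntax are illustrative only, not any of those modules' actual APIs.

```perl
#!/usr/bin/perl -w
# Toy illustration of the templating idea discussed in this thread.
# fill_in() and its [% name %] placeholder syntax are illustrative,
# not the real HTML::Template, Template Toolkit, or Mason API.
use strict;

sub fill_in {
    my ($template, %param) = @_;
    # Replace each [% name %] with the named parameter (or '' if absent).
    $template =~ s/\[%\s*(\w+)\s*%\]/
        defined $param{$1} ? $param{$1} : ''/ge;
    return $template;
}

my $page = "<h1>[% title %]</h1>\n<p>[% body %]</p>\n";
print fill_in($page, title => 'PDX.pm', body => 'Hello, mongers.');
```

The real modules add the features the thread mentions on top of this core idea: loops and conditionals in the template, caching of compiled templates, and (in Mason's case) full embedded Perl.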