From robbyrussell at pdxlug.org Mon Aug 4 11:43:59 2003 From: robbyrussell at pdxlug.org (Robby Russell) Date: Mon Aug 2 21:34:24 2004 Subject: [Pdx-pm] file2ps perl modules? Message-ID: <62908.66.93.77.251.1060015439.squirrel@webmail.pdxlug.org> Are there any CPAN modules that anyone has used that allow someone to from a command line (which will be from a web upload form) that would create a ps file from it? word->ps open office ->ps html -> xls ->ps Is there a module that will handle a majority of common file types? (emulate a printer perhaps?) -- Robby Russell Portland Linux User Group http://www.pdxlug.org/ From schwern at pobox.com Mon Aug 4 21:14:45 2003 From: schwern at pobox.com (Michael G Schwern) Date: Mon Aug 2 21:34:24 2004 Subject: [Pdx-pm] file2ps perl modules? In-Reply-To: <62908.66.93.77.251.1060015439.squirrel@webmail.pdxlug.org> References: <62908.66.93.77.251.1060015439.squirrel@webmail.pdxlug.org> Message-ID: <20030805021445.GE26365@windhund.schwern.org> On Mon, Aug 04, 2003 at 09:43:59AM -0700, Robby Russell wrote: > Are there any CPAN modules that anyone has used that allow someone to from > a command line (which will be from a web upload form) that would create a > ps file from it? > > word->ps > open office ->ps > html -> > xls ->ps > > Is there a module that will handle a majority of common file types? > (emulate a printer perhaps?) You could always try GNU a2ps. -- Michael G Schwern schwern@pobox.com http://www.pobox.com/~schwern/ My lips, your breast and a whole lotta strange arguing. From dstillwa at xprt.net Tue Aug 5 18:02:19 2003 From: dstillwa at xprt.net (Daniel C. Stillwaggon) Date: Mon Aug 2 21:34:24 2004 Subject: [Pdx-pm] file2ps perl modules? In-Reply-To: <62908.66.93.77.251.1060015439.squirrel@webmail.pdxlug.org> Message-ID: On Monday, Aug 4, 2003, at 09:43 US/Pacific, Robby Russell wrote: > Are there any CPAN modules that anyone has used that allow someone to > from > a command line (which will be from a web upload form) that would > create a > ps file from it? > > word->ps > open office ->ps > html -> > xls ->ps > > Is there a module that will handle a majority of common file types? > (emulate a printer perhaps?) iirc, the ghostscript distribution includes html2ps, pdf2ps, and a few others. They are not Perl modules, but you can call them from inside a perl script... possibly chained together to get what you need? HTH --------------------------------------------------------------------- Daniel C. Stillwaggon From jkeroes at eli.net Wed Aug 6 10:09:39 2003 From: jkeroes at eli.net (Joshua Keroes) Date: Mon Aug 2 21:34:24 2004 Subject: [Pdx-pm] Aug meeting - Jeff Griffin Message-ID: <20030806150939.GH19937@eli.net> The next Portland Perl Mongers meeting is upon us: Jeff Griffin of Powells City of Books Wednesday, 13 August 2003, 6:30-? This is a man who munges some serious data for a living. Have you ever wondered where all the book listings, the pictures, the everything that www.powells.com displays come from? Here's a hint: everywhere. Jeff scrapes the web and funnels various data formats (XML, CSV, Excel etc.) and images into MySQL for product pages. Next Weds he'll tell us how he does it. Here's a hint: Perl. 
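Following up on the file2ps thread above: a minimal sketch of the external-converter approach Schwern and Daniel suggest (a2ps for plain text, plus the html2ps and pdf2ps helpers that travel with ghostscript), driven from Perl by file extension. The extension map, converter commands, and output naming are illustrative assumptions rather than a tested recipe; Word and Excel inputs would still need a converter of their own.

#################### start of any2ps sketch ####################
#!/usr/bin/perl -w
use strict;

# Map a file extension to an external command that writes PostScript.
# Which converters exist varies by system -- adjust to taste.
my %converter = (
    txt  => sub { my ($in, $out) = @_; system('a2ps', '-o', $out, $in) == 0 },
    html => sub { my ($in, $out) = @_; system("html2ps '$in' > '$out'") == 0 },
    pdf  => sub { my ($in, $out) = @_; system('pdf2ps', $in, $out) == 0 },
);

my $in = shift or die "usage: any2ps file\n";
my ($ext) = $in =~ /\.(\w+)$/;
$ext = lc($ext || '');

my $conv = $converter{$ext}
    or die "don't know how to turn .$ext into PostScript\n";

(my $out = $in) =~ s/\.\w+$/.ps/;
$conv->($in, $out) or die "conversion of $in failed\n";
print "wrote $out\n";
#################### end of any2ps sketch ####################

A web upload handler could save the uploaded file to a spool directory and call the script on it; the hard cases (word, xls) are exactly the ones where "emulate a printer" via OpenOffice or a Windows print queue ends up being the practical answer.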
Thanks to Thomas Keller and his department's monetary donation, we have a shiny
new location this month, a lab with Macs, PCs, and wireless connectivity for
the laptop-enabled among us:

	Room #122 (downstairs)
	Biomedical Info Communication Center (the BICC)
	Oregon Health and Sciences University (OHSU)

Snacks and drink will be provided, paid for by t-shirt funds.

Location notes:

* The BICC is building 17 on the campus map. [1]
* Free parking in lot 83 just up the hill and across the street, behind the
  OHSU Auditorium.
* Follow the directions on OHSU's directions page [2]

J

[1] http://www.ohsu.edu/about/campusmap.html
[2] http://www.ohsu.edu/about/directions.shtml

From mikeraz at patch.com  Thu Aug  7 18:03:25 2003
From: mikeraz at patch.com (Michael Rasmussen)
Date: Mon Aug 2 21:34:24 2004
Subject: [Pdx-pm] corruption puzzlement
Message-ID: <20030807230325.GB27500@patch.com>

Focused question here. (Snigger about the code behind my back, OK?)

We had text files arriving via scp with Unix-style EOL characters that would
eventually be used by Windows people. I had to convert the line endings in the
files, so I created a pair of scripts to handle the task (unix2dos was not
available on the system).

check2convert runs continuously, sleeping for two minutes and then checking
whether there are new files in the directory to muss with; if so, 2dos is
called for each file.

There is a group of files that arrives about 2:00am. This last Monday one of
them showed up with 0 size. Normally this file (a large one) takes about 40
seconds to transfer between sites.

I did some munging (eliminating the sleep and the file time stamp comparison)
to try and duplicate the truncation. This raised two questions:

1) Since the transfer takes 40 seconds and I loop every 120 seconds, I'd
   expect to see 2dos trash the file every once in a while. This hasn't
   happened. Huh??

2) No matter what I did, I couldn't replicate the truncate-the-file-to-0-bytes
   behavior. Huh??

Is this pair of quickies potentially responsible for the 0 byte file we
received earlier this week? Any ideas on why 2dos doesn't trash about 1 in 3
of the incoming files, where the transfer time would overlap with the loop
invocation?

################# Start of check2convert ############
#!/usr/bin/perl

while (1) {
    $mtime_ref = (stat (".timestamp"))[9];
    $now = time;
    utime $now, $now, ".timestamp";

    @dir = `ls *.txt *.csv`;

    foreach $f (@dir) {
        chomp $f;
        $mtime_cmp = (stat ($f))[9];
        if ( ($mtime_cmp > $mtime_ref) && -f $f ) {
            $cmd = "./2dos $f";
            system $cmd;
        }
    }
    sleep 120;
} # while(1)
################## end of check2convert #############

################## start of 2dos ####################
#!/usr/bin/perl -i

# slurp in a file and make it have dos line endings
# be nice if I could do the test non destructively
# open close open???

$eol = "\r\n";

$line = <>;

if ($line =~ /\r\n/) { $/ = $eol; }

chomp $line;
print "$line$eol";

while(<>) {
    chomp;
    print "$_$eol";
}
############# end of 2dos #############################

--
Michael Rasmussen, Infrastructure Engineer
Columbia Management Company, Portland, Oregon
Michael.Rasmussen@ColumbiaManagement.com
Desk: 971-925-6723   Desk: 503-973-6723 (deprecated)   Cell: 503-209-6227
------------------------------------------------------------------------------
NOTICE: This communication may contain confidential or other privileged
information. If you are not the intended recipient, or believe that you have
received this communication in error, please do not print, copy, retransmit,
disseminate, or otherwise use the information.
Also, please indicate to the sender that you have received this email in error, and delete the copy you received. Any communication that does not relate to official Columbia Management Group business is that of the sender and is neither given nor endorsed. Thank you. ============================================================================== ----- End forwarded message ----- -- Michael Rasmussen aka mikeraz Be appropriate && Follow your curiosity http://www.patch.com/ http://wiki.patch.com/ http://wiki.patch.com/index.php/BicycleCommuting The fortune cookie says: The two most beautiful words in the English language are "Cheque Enclosed." -- Dorothy Parker From kyle at silverbeach.net Thu Aug 7 19:08:40 2003 From: kyle at silverbeach.net (Kyle Hayes) Date: Mon Aug 2 21:34:24 2004 Subject: [Pdx-pm] corruption puzzlement In-Reply-To: <20030807230325.GB27500@patch.com> References: <20030807230325.GB27500@patch.com> Message-ID: <200308071708.40426.kyle@silverbeach.net> On Thursday 07 August 2003 16:03, Michael Rasmussen wrote: > Focused question here. (sigger about the code behind my > back ok?) > > We had text files arriving via scp with Unix style EOL > characters that would eventually be used by Windows people. > Had to convert the line endings in the files. So I created > a pair of scripts to handle the task ( unix2dos not available > on the system) > > check2convert runs continuously, sleepign for two minutes and > then checking if there are new files in the directory to muss > with, if so 2dos is called for each file. > > There is a group of files that arrives about 2:00am. This > last Monday one of them showed up with 0 size. Normally this > file (a large one) takes about 40 seconds to transfer between > sites. I've seen this when something temporarily hangs SCP just at the wrong time. The action to create the file goes fine, then something burps on the network and no data is actually put into the file for a few seconds. If your program runs at just that time, it'll see a zero byte file. File creation is a different action from putting data into the file. Just because the file is there does not mean that the data is there yet. If you've got a Linux system and active disks, it is possible to get the data without having the directory stitched up yet too (depends on the filesystem). > I did some munging (eliminating the sleep and the file time stamp > comparison) to try and duplicate the truncation. This > raised two questions: > > 1) since the transfer takes 40 seconds and I loop every 120 seconds > I'd expect to see 2dos trash the file every once in awhile. This hasn't > happened. Huh?? Possibly luck? Heisenbugs generally work that way. > 2) No matter what I did I couldn't replicate the trucate the file to 0 > bytes behavior. If it is a timing issue as I mentioned above, it might be pretty hard to duplicate. I've only seen it a few times and I've got stuff that copies thousands of files daily that's been running for years. We worked around it by using sentinels at the end of the data and checking for file size. > Huh?? Is this pair of quickies potentially responsible for the 0 byte > file we received earlier this week? Any ideas on why 2dos doesn't trash > about 1 in 3 of the incoming files where the transfer time would overlap > with the loop invocation? 
>
> ################# Start of check2convert ############
> #!/usr/bin/perl
>
> while (1) {
>     $mtime_ref = (stat (".timestamp"))[9];
>     $now = time;
>     utime $now, $now, ".timestamp";
>
>     @dir = `ls *.txt *.csv`;
>
>     foreach $f (@dir) {
>         chomp $f;
>         $mtime_cmp = (stat ($f))[9];
>         if ( ($mtime_cmp > $mtime_ref) && -f $f ) {
>             $cmd = "./2dos $f";
>             system $cmd;
>         }
>     }
>     sleep 120;
> } # while(1)

Change the program so that you wait until the file is at least 60 seconds old
(if the longest file takes 40 seconds, give yourself some fudge factor). Your
current "window" runs roughly from now back to 120 seconds ago. You want to
move the window back in time (cheeseball code warning!):

while (1) {
    $mtime_ref = (stat (".timestamp"))[9];
    $now = time - 60;                  # shift our window back 60 seconds
    utime $now, $now, ".timestamp";    # time stamp in the past.

    @dir = `ls *.txt *.csv`;

    foreach $f (@dir) {
        chomp $f;
        $mtime_cmp = (stat ($f))[9];
        # file must have shown up in a roughly two minute window
        # starting one minute ago and extending two minutes before that.
        # this gives the file time to "settle" (for all the data to be written).
        if ( ($mtime_cmp > $mtime_ref) && ($mtime_cmp <= $now) && -f $f ) {
            $cmd = "./2dos $f";
            system $cmd;
        }
    }
    sleep 120;
} # while(1)

Also note that if you can stat the file, it is probably there, so the -f may
be redundant.

Your guarding if statement can still result in _missing_ a file altogether.
You have a race condition. On a fast machine/network, it could happen. Here's
the scenario:

1) At time 42, your program comes out of the sleep and starts running. It tags
   the timestamp file.

2) You get the directory listing into @dir, but it's still time 42. Fast disk,
   directory in cache, whatever. If your program runs a lot, you will have
   stuff in the d-cache on Linux (probably in some similar cache on most OSes
   except maybe Win 9x).

3) A remote SCP drops another file into the directory quickly. The mtime for
   the file is still time 42. But remember that you got the directory listing
   in step 2.

If all of steps 1-3 take less than a second, then you could miss the file
dropped in step 3. The next time around the loop, you'll skip the new file
because it has the same mtime as the timestamp file.

I generally process files into different directories. The raw files land in
one directory, and I move them to another directory after processing. This
means that only files that need processing are in the input directory.

The problem is actually a bit worse than it seems. Depending on the filesystem
used, you may see that the file is created and the data inserted into it
_before_ the directory entry is created along with the mtime. Thus, it is
possible to have the file start being created before time 42, but finish and
show up in step 3 above. I've seen up to five second delays on heavily loaded
Linux systems running ext2 filesystems. Ext3 and Reiser running in journalling
mode could actually have this problem worse than ext2. The WinNT filesystem
can get really weird this way. On a heavily loaded system, I timed a file
taking more than 30 seconds to show up in a directory after a copy operation
said it was complete.

Is there some sort of sentinel that you can look for at the end of the file?
If the file is pretty big, just having a fudge-factor delay isn't really a
solution. It might alleviate the problem, but it won't solve it.
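One concrete way to act on that sentinel suggestion (a hedged sketch, not code from the original thread): have the sending side append an agreed marker as the last line of each file, and have the watcher refuse to hand a file to 2dos until the marker is present. The "##EOF##" marker, the 4k tail read, and the glob pattern below are assumptions for illustration; the mtime/.timestamp bookkeeping from check2convert would still apply in a real run.

#################### start of sentinel check sketch ####################
#!/usr/bin/perl -w
use strict;

# Returns true if the last line of $file is the agreed end-of-data marker.
# Reads only the tail of the file, so large transfers stay cheap to test.
sub transfer_complete {
    my ($file, $marker) = @_;
    open my $fh, '<', $file or return 0;

    # Seek to (at most) the last 4k and take the final line from it.
    my $size = -s $fh;
    my $back = $size > 4096 ? 4096 : $size;
    seek $fh, -$back, 2 or return 0;
    my @tail = <$fh>;
    close $fh;

    return 0 unless @tail;
    my $last = $tail[-1];
    $last =~ s/\r?\n$//;             # tolerate either line ending
    return $last eq $marker;
}

# Example: only convert files whose sender appended "##EOF##" as the last line.
for my $f (glob "*.txt *.csv") {
    next unless transfer_complete($f, '##EOF##');
    system './2dos', $f;
}
#################### end of sentinel check sketch ####################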
> ################## end of check2convert #############
>
> ################## start of 2dos ####################
> #!/usr/bin/perl -i
>
> # slurp in a file and make it have dos line endings
> # be nice if I could do the test non destructively
> # open close open???
>
> $eol = "\r\n";
>
> $line = <>;
>
> if ($line =~ /\r\n/) { $/ = $eol; }
>
> chomp $line;
> print "$line$eol";
>
> while(<>) {
>     chomp;
>     print "$_$eol";
> }
> ############# end of 2dos #############################

Erm, where does the output go? Are these programs sanitized to protect the
innocent?

Best,
Kyle

From dstillwa at xprt.net  Thu Aug  7 22:47:11 2003
From: dstillwa at xprt.net (Daniel C. Stillwaggon)
Date: Mon Aug 2 21:34:24 2004
Subject: [Pdx-pm] Possibly OT DC question
Message-ID: 

I'm not sure if this message is apropos for this list, but I've seen all sorts
of things come down the line, so I thought I'd give it a whirl. :-)

I'm relocating to the DC area and wondering if anyone has any pointers they
might be able to give for a good contracting agency out there? By good I mean
reasonably together and honest, as well as dealing in Perl jobs at least
partly. By pointers I mean any names or numbers or personal horror/joyful
stories you might feel willing to share. I've lived on the west coast (Oregon)
for most of my life, and the DC job market looks like a foreign country!

---------------------------------------------------------------------
Daniel C. Stillwaggon

From mikeraz at patch.com  Fri Aug  8 12:00:29 2003
From: mikeraz at patch.com (Michael Rasmussen)
Date: Mon Aug 2 21:34:24 2004
Subject: [Pdx-pm] corruption puzzlement - followup
In-Reply-To: <200308071708.40426.kyle@silverbeach.net>
References: <20030807230325.GB27500@patch.com> <200308071708.40426.kyle@silverbeach.net>
Message-ID: <20030808170029.GA30790@patch.com>

Kyle,

Thank you very much for the information on race conditions, SCP issues and
directory entry creation latency.

I'd asked my question as a post mortem for the problem we observed on Tuesday.
The script was originally created so that users could do a Samba mount of the
directory and pull the file when they received an email stating it was
available. Now we've moved into a scenario where I move the files to other
servers for the users and other processes to pick up. As a result I've killed
the process that monitored the directory and moved the EOL conversion into the
script that moves the files to the NT servers.

The code flow is now:

    opendir(DIR, $incoming_dir);
    @files = grep { /spec of interest/ } readdir(DIR);
    foreach $file (@files) {
        # non portable test for open file condition, you're warned
        @openfiles = `/usr/sbin/lsof +d $incoming_dir 2>/dev/null`;
        if ( ! ( grep(/$file/, @openfiles) ) ) {
            proceed with munging and copying and backing up and all;
        } else {
            twiddle bits for a bit and try again
        }
    }

Since the files are dropped into $incoming_dir by the transfer process run by
the business partner and aren't touched by anything/anyone but my process, I'm
confident (let me know if I'm wrong here) that if I catch it on the readdir,
and if it's not open when I test for it, then it has arrived on site
completely and I'm free to do my stuff. The only race condition I see is that
the file may actually close between the time I test for it being open and when
I want to act on it, but that's OK since I'll return later.

Lesson: Don't throw together a quickie while users are in testing mode and
leave it there.
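For comparison, here is one self-contained shape that flow could take. It is a sketch under assumptions: the directory paths, the .txt/.csv pattern, and the archive step are placeholders, and lsof must be installed and reachable, which (as Kyle notes below) is not a given on every system.

#################### start of move-when-closed sketch ####################
#!/usr/bin/perl -w
use strict;

my $incoming_dir = '/data/incoming';   # assumed location
my $archive_dir  = '/data/archive';    # assumed location

opendir my $dh, $incoming_dir or die "can't read $incoming_dir: $!";
my @files = grep { /\.(?:txt|csv)$/ } readdir $dh;   # "spec of interest"
closedir $dh;

for my $file (@files) {
    # Non-portable test for the open-file condition, as in the original:
    # ask lsof which files under the incoming directory are still open.
    my @openfiles = `lsof +d $incoming_dir 2>/dev/null`;

    # \Q...\E keeps dots in the filename from acting as regex wildcards.
    if (grep { /\Q$file\E/ } @openfiles) {
        next;   # still being written -- leave it for the next pass
    }

    # Arrived completely: convert line endings, then keep an archive copy.
    system('./2dos', "$incoming_dir/$file") == 0
        or warn "2dos failed on $file\n";
    system('cp', "$incoming_dir/$file", "$archive_dir/$file") == 0
        or warn "archive copy of $file failed\n";
}
#################### end of move-when-closed sketch ####################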
On Thu, Aug 07, 2003 at 05:08:40PM -0700, Kyle Hayes wrote: > Change the the program so that you wait until the file is at least 60 seconds > old (if the longest file takes 40 seconds, give yourself some fudge factor). Then on a day of network congestion slowing things down or when the market grows crazy and the file size doubles I'm back where I started. Hence the defer action as long as possible approach I eventually took. > Also note that if you can stat the file, it is probably there, so the -f may > be redundant. That was polite. :) Yes, if I can stat it, it will pass the existance test. > I generally process files into different directories. The raw files land in > one directory, and I move them to another directory after processing. This > means that only files that need processing are in the input directory. Unstated in my original post, but that is exactly what we do. Files arrive in one directory and then are forwarded on to another machine and saved in an archive directory locally. > finish and show up in step 3 above. I've seen up to five second delays on > heavily loaded Linux systems running ext2 filesystems. Ext3 and Reiser > running in journalling mode could actually have this problem worse than ext2. > The WinNT filesystem can get really weird this way. On a heavily loaded > system, I timed a file taking more than 30 seconds to show up in a directory > after a copy operation said it was complete. Amazing. I had no idea. That's very important. > Is there some sort of sentinel that you can look for at the end of the file? > If the file is pretty big, just having a fudge factor delay isn't really a > solution. It might alleviate the problem, but it won't solve it. > > ################## start of 2dos #################### > > #!/usr/bin/perl -i > > [snip] > > ############# end of 2dos ############################# > > Erm, where does the output go? Are these programs sanitized to protect the > innocent? You missed the -i on the #! line, an in place edit with no backup. -- Michael Rasmussen aka mikeraz Be appropriate && Follow your curiosity http://www.patch.com/ http://wiki.patch.com/ http://wiki.patch.com/index.php/BicycleCommuting The fortune cookie says: Never reveal your best argument. From kyle at silverbeach.net Fri Aug 8 12:54:54 2003 From: kyle at silverbeach.net (Kyle Hayes) Date: Mon Aug 2 21:34:24 2004 Subject: [Pdx-pm] corruption puzzlement - followup In-Reply-To: <20030808170029.GA30790@patch.com> References: <20030807230325.GB27500@patch.com> <200308071708.40426.kyle@silverbeach.net> <20030808170029.GA30790@patch.com> Message-ID: <200308081054.54830.kyle@silverbeach.net> On Friday 08 August 2003 10:00, Michael Rasmussen wrote: > Kyle, > > Thank you very much for the information on race conditions, > SCP issues and directory entry creation latency. I have some bitter experience in the area :-( > I'd asked my question as a post mortem for the problem we > observed on Tuesday. The script was originally created so > that users could do a Samba mount of the directory and pull > the file when they received an email stating it was available. OK. > Now we've moved into a scenario where I move the files to > other servers for the users and other processes to pick up. > As a result I've killed the process that monitored the directory > and moved the EOL conversion into the script that moves the files > to the NT severs. 
> > code flow is now: > opendir(DIR, $incoming_dir); > @files = grep { /spec of interest/ } readdir(DIR); > foreach $file (@files) { > # non portable test for open file condition, you're warned > @openfiles = `/usr/sbin/lsof +d $incoming_dir 2>/dev/null`; > if ( ! ( grep (/$file/) @openfiles ) ) { > proceed with munging and copying and backing up and all; > } else { > twiddle bits for a bit and try again > } > } > > Since the files are dropped into $incoming_dir by the tranfer process by > the business partner and aren't touched by anything/one but my process > I'm confident (let me know if I'm wrong here) that If I catch it on the > readdir and if it's not open when I test for it then it has arrived on > site completely and I'm free to do my stuff. The only race condition I > see is that the file may actually close between the time I test for it > being open and when I want to act on it, but that's OK since I'll return > later. > > Lesson: Don't throw together a quickie while users are in testing mode > and leave it there. The only question I have is whether the file is listed as open (i.e. a file descriptor is allocated to some process for it) during the entire creation process. I cannot see why this would not be the case, but I think it would be possible that scp could create the file, close it, reopen it and then add the contents. Why someone would code it this way, I can't imagine, but then I can't imagine some of the stuff that is apparent in Windows either. I suspect, as you note, that the only race condition left is not fatal. If a file is open when you check, you'll just get it on the next round and no harm done. Be warned that lsof is not always installed by default. I know that in many versions of Red Hat (6.0 through 7.x I believe, not sure about >8), it was not. I don't think SuSE installed it either, but I cannot remember. I use it often to track down processes that leak sockets and/or file descriptors. > On Thu, Aug 07, 2003 at 05:08:40PM -0700, Kyle Hayes wrote: > > Change the the program so that you wait until the file is at least 60 > > seconds old (if the longest file takes 40 seconds, give yourself some > > fudge factor). > > Then on a day of network congestion slowing things down or when the market > grows crazy and the file size doubles I'm back where I started. Hence the > defer action as long as possible approach I eventually took. Right. This was my point about this not being a real solution. It helps the problem, but doesn't cure it. As long as the file is open without any interruption from the initial file creation through the final filling in of the data, your method effectively uses a lock (the existence of an open fd to the file). This should be quite safe with the above caveat. > > Also note that if you can stat the file, it is probably there, so the -f > > may be redundant. > > That was polite. :) Yes, if I can stat it, it will pass the existance test. I generally reverse the tests. It is possible to see if a file exists, but have stat fail I believe. You'd need weird permissions to make this happen, but I think it is possible. I know on NT it is possible to have a file visible in a directory, but you cannot open it or find its size (i.e. stat it). > > finish and show up in step 3 above. I've seen up to five second delays > > on heavily loaded Linux systems running ext2 filesystems. Ext3 and > > Reiser running in journalling mode could actually have this problem worse > > than ext2. The WinNT filesystem can get really weird this way. 
On a > > heavily loaded system, I timed a file taking more than 30 seconds to show > > up in a directory after a copy operation said it was complete. > > Amazing. I had no idea. That's very important. Yes, people don't often understand that non-atomic operations have a way of becoming very visible at the wrong times :-( I've had to write some pretty twisted code just to handle log file rotation, processing and archiving due to things like this. It gets really annoying when you realize that you've managed to copy all but the last three lines of a set of log files, for six months. I wrote some Perl code to do replication for MySQL (before native replication was complete) and this kind of problem was really painful. Obviously, if you replicate all but the last few lines of SQL, you didn't really make the slave the same as the master. This took us weeks to track down. Luckily we had no failovers before we fixed it! If something is really important, I use a sentinel at the end of the file or I use file locks. Sentinels are nice because you can just look for them without worrying if the file is closed or open still. > > > ################## start of 2dos #################### > > > #!/usr/bin/perl -i > > > [snip] > > > ############# end of 2dos ############################# > > > > Erm, where does the output go? Are these programs sanitized to protect > > the innocent? > > You missed the -i on the #! line, an in place edit with no backup. D'oh! Yep, I did. I generally write fairly large Perl programs and use warning and tainting flags, so I didn't even think to look there. Sigh. Best, Kyle From jouke at pvoice.org Mon Aug 11 13:21:29 2003 From: jouke at pvoice.org (Jouke Visser) Date: Mon Aug 2 21:34:24 2004 Subject: [Pdx-pm] [OT] Need some help - No Perl Content... Message-ID: <3F37DEA9.1090808@pvoice.org> Hi, As you may or may not have read in my use.perl journal, I recently received a travel-grant from The Perl Foundation to spread the word on the work I do on pVoice (http://www.pvoice.org), my free software for disabled children, written in Perl. The idea is to come to Portland next summer to speak at OSCON about pVoice, and while I'm in the area, to get as many speaking engagements as I can to talk about pVoice to people working with disabled children and may need something like pVoice (Speech Therapists, Physical Therapists, other professionals at Rehabilitation Centers, Children's Hospitals, but also parents of those children). From what I can tell, the use of Open Source software as an aid for Disabled people is something that's completely unknown to the people who make decisions about it. Lots of people need devices and/or software and can't get their insurance to pay for it. With Open Source software it becomes a lot easier to use the software you need. I've already tried to contact the Child Development and Rehabilitation Center at OHSU (whose mailserver has been unavailable in the past week as it seems), and the Doernbecher institute (also a part of OHSU), who haven't responded (yet). My question to you is, do you happen to know people working in this area, and if you do, could you help me getting in touch with them to try and see if we can arrange a meeting next year to get the most out of the grant I received? 
Thanks, Jouke Visser http://www.pvoice.org From jkeroes at eli.net Tue Aug 12 15:07:48 2003 From: jkeroes at eli.net (Joshua Keroes) Date: Mon Aug 2 21:34:24 2004 Subject: [Pdx-pm] Aug meeting tomorrow - Jeff Griffin In-Reply-To: <20030806150939.GH19937@eli.net> References: <20030806150939.GH19937@eli.net> Message-ID: <20030812200748.GA5155@eli.net> The next Portland Perl Mongers meeting is upon us: Jeff Griffin of Powells City of Books Tomorrow, Wednesday, 13 August 2003, 6:30-? This is a man who munges some serious data for a living. Have you ever wondered where all the book listings, the pictures, the everything that www.powells.com displays come from? Here's a hint: everywhere. Jeff scrapes the web and funnels various data formats (XML, CSV, Excel etc.) and images into MySQL for product pages. Next Weds he'll tell us how he does it. Here's a hint: Perl. Thanks to Thomas Keller and his department's monetary donation, we have a shiny new location this month, a lab with Macs, PCs, and wireless-connectivity for the laptop-enabled among us: Room #122 (downstairs) Biomedical Info Communication Center (the BICC) Oregon Health and Sciences University (OHSU) Snacks and drink will be provided, paid for by t-shirt funds. Location notes: * The BICC is building 17 on the campus map. [1] * Free parking in lot 83 just up the hill and across the street, behind the OHSU Auditorium. * Follow the directions on OHSU's directions page [2] J [1] http://www.ohsu.edu/about/campusmap.html [2] http://www.ohsu.edu/about/directions.shtml From merlyn at stonehenge.com Tue Aug 12 15:32:07 2003 From: merlyn at stonehenge.com (Randal L. Schwartz) Date: Mon Aug 2 21:34:24 2004 Subject: [Pdx-pm] Aug meeting tomorrow - Jeff Griffin In-Reply-To: <20030812200748.GA5155@eli.net> References: <20030806150939.GH19937@eli.net> <20030812200748.GA5155@eli.net> Message-ID: <86smo6fwhr.fsf@blue.stonehenge.com> >>>>> "Joshua" == Joshua Keroes writes: Joshua> The next Portland Perl Mongers meeting is upon us: I can't attend, because it's right across from the Wil Wheaton book signing at Powell's in Beaverton... but if you end up at a pub nearby, please call me (UA has the number) to let me know so I can come by. -- Randal L. Schwartz - Stonehenge Consulting Services, Inc. - +1 503 777 0095 Perl/Unix/security consulting, Technical writing, Comedy, etc. etc. See PerlTraining.Stonehenge.com for onsite and open-enrollment Perl training! From mikeraz at patch.com Tue Aug 12 15:43:29 2003 From: mikeraz at patch.com (Michael Rasmussen) Date: Mon Aug 2 21:34:24 2004 Subject: [Pdx-pm] Aug meeting tomorrow - Jeff Griffin In-Reply-To: <86smo6fwhr.fsf@blue.stonehenge.com> References: <20030806150939.GH19937@eli.net> <20030812200748.GA5155@eli.net> <86smo6fwhr.fsf@blue.stonehenge.com> Message-ID: <20030812204329.GA25175@patch.com> The OHSU Marquam Hill Campus is in Beaverton? And to think I was going to trot us the hill from work in downtown. Randal? On Tue, Aug 12, 2003 at 01:32:00PM -0700, Randal L. Schwartz wrote: > I can't attend, because it's right across from the Wil Wheaton book > signing at Powell's in Beaverton -- Michael Rasmussen aka mikeraz Be appropriate && Follow your curiosity http://www.patch.com/ http://wiki.patch.com/ http://wiki.patch.com/index.php/BicycleCommuting The fortune cookie says: Small animal kamikaze attack on power supplies From merlyn at stonehenge.com Tue Aug 12 15:44:38 2003 From: merlyn at stonehenge.com (Randal L. 
Schwartz) Date: Mon Aug 2 21:34:24 2004 Subject: [Pdx-pm] Aug meeting tomorrow - Jeff Griffin In-Reply-To: <20030812204329.GA25175@patch.com> References: <20030806150939.GH19937@eli.net> <20030812200748.GA5155@eli.net> <86smo6fwhr.fsf@blue.stonehenge.com> <20030812204329.GA25175@patch.com> Message-ID: <86isp2fvwx.fsf@blue.stonehenge.com> >>>>> "Michael" == Michael Rasmussen writes: Michael> The OHSU Marquam Hill Campus is in Beaverton? And to think I was going to trot us Michael> the hill from work in downtown. Randal? "right across" = timewise. -- Randal L. Schwartz - Stonehenge Consulting Services, Inc. - +1 503 777 0095 Perl/Unix/security consulting, Technical writing, Comedy, etc. etc. See PerlTraining.Stonehenge.com for onsite and open-enrollment Perl training! From jkeroes at eli.net Tue Aug 12 16:14:19 2003 From: jkeroes at eli.net (Joshua Keroes) Date: Mon Aug 2 21:34:24 2004 Subject: [Pdx-pm] Aug meeting tomorrow - Jeff Griffin In-Reply-To: <20030812204329.GA25175@patch.com> References: <20030806150939.GH19937@eli.net> <20030812200748.GA5155@eli.net> <86smo6fwhr.fsf@blue.stonehenge.com> <20030812204329.GA25175@patch.com> Message-ID: <20030812211418.GD5155@eli.net> No. Do not let the Randal confuse you. The meeting is at OHSU central. Pill Hill. Markham campus. On top of the West Hills. Just SW of downtown Portland. In Portland. Not Beaverton. See you soon, J On (Tue, Aug 12 13:43), Michael Rasmussen wrote: > The OHSU Marquam Hill Campus is in Beaverton? And to think I was going > to trot us the hill from work in downtown. Randal? > > On Tue, Aug 12, 2003 at 01:32:00PM -0700, Randal L. Schwartz wrote: > > I can't attend, because it's right across from the Wil Wheaton book > > signing at Powell's in Beaverton From dpool at hevanet.com Tue Aug 12 19:26:47 2003 From: dpool at hevanet.com (david pool) Date: Mon Aug 2 21:34:24 2004 Subject: [Pdx-pm] Aug meeting tomorrow - Jeff Griffin References: <20030806150939.GH19937@eli.net> <20030812200748.GA5155@eli.net> <86smo6fwhr.fsf@blue.stonehenge.com> <20030812204329.GA25175@patch.com> <20030812211418.GD5155@eli.net> Message-ID: <3F3985C7.4040702@hevanet.com> Joshua Keroes wrote: > No. Do not let the Randal confuse you. The meeting is at OHSU central. > Pill Hill. Markham campus. On top of the West Hills. Just SW of downtown > Portland. In Portland. Not Beaverton. Ah, now I get it. It's off of terwilliger then... d From btp at rentrak.com Tue Aug 12 15:52:33 2003 From: btp at rentrak.com (Ben Prew) Date: Mon Aug 2 21:34:24 2004 Subject: [Pdx-pm] Aug meeting tomorrow - Jeff Griffin In-Reply-To: <20030812200748.GA5155@eli.net> References: <20030806150939.GH19937@eli.net> <20030812200748.GA5155@eli.net> Message-ID: <20030812205233.GA12586@rentrak.com> On Tue, Aug 12, 2003 at 01:07:48PM -0700, Joshua Keroes wrote: > The next Portland Perl Mongers meeting is upon us: > > Jeff Griffin of Powells City of Books > Tomorrow, Wednesday, 13 August 2003, 6:30-? I'm assuming this is Powell's technical books? -- Ben Prew btp@rentrak.com From jkeroes at eli.net Wed Aug 13 14:21:53 2003 From: jkeroes at eli.net (Joshua Keroes) Date: Mon Aug 2 21:34:24 2004 Subject: [Pdx-pm] Aug meeting *tonight* - Jeff Griffin In-Reply-To: <20030812200748.GA5155@eli.net> References: <20030806150939.GH19937@eli.net> <20030812200748.GA5155@eli.net> Message-ID: <20030813192153.GD19937@eli.net> Bonus: we may have a surpise followup lecture. Come and find out! -J Jeff Griffin of Powells City of Books Tonight, Wednesday, 13 August 2003, 6:30-? 
This is a man who munges some serious data for a living. Have you ever wondered where all the book listings, the pictures, the everything that www.powells.com displays come from? Here's a hint: everywhere. Jeff scrapes the web and funnels various data formats (XML, CSV, Excel etc.) and images into MySQL for product pages. Next Weds he'll tell us how he does it. Here's a hint: Perl. Thanks to Thomas Keller and his department's monetary donation, we have a shiny new location this month, a lab with Macs, PCs, and wireless-connectivity for the laptop-enabled among us: Room #122 (downstairs) Biomedical Info Communication Center (the BICC) Oregon Health and Sciences University (OHSU) Snacks and drink will be provided, paid for by t-shirt funds. Location notes: * The BICC is building 17 on the campus map. [1] * Free parking in lot 83 just up the hill and across the street, behind the OHSU Auditorium. * Follow the directions on OHSU's directions page [2] J [1] http://www.ohsu.edu/about/campusmap.html [2] http://www.ohsu.edu/about/directions.shtml From schwern at pobox.com Wed Aug 13 15:32:50 2003 From: schwern at pobox.com (Michael G Schwern) Date: Mon Aug 2 21:34:24 2004 Subject: [Pdx-pm] Aug meeting *tonight* - Jeff Griffin In-Reply-To: <20030813192153.GD19937@eli.net> References: <20030806150939.GH19937@eli.net> <20030812200748.GA5155@eli.net> <20030813192153.GD19937@eli.net> Message-ID: <20030813203250.GJ22658@windhund.schwern.org> On Wed, Aug 13, 2003 at 12:21:53PM -0700, Joshua Keroes wrote: > Bonus: we may have a surpise followup lecture. Come and find out! Exciting! Who is it? -- Michael G Schwern schwern@pobox.com http://www.pobox.com/~schwern/ Here's hoping you don't become a robot! From jkeroes at eli.net Wed Aug 13 16:23:20 2003 From: jkeroes at eli.net (Joshua Keroes) Date: Mon Aug 2 21:34:24 2004 Subject: [Pdx-pm] Aug meeting *tonight* - Jeff Griffin In-Reply-To: <20030813203250.GJ22658@windhund.schwern.org> References: <20030806150939.GH19937@eli.net> <20030812200748.GA5155@eli.net> <20030813192153.GD19937@eli.net> <20030813203250.GJ22658@windhund.schwern.org> Message-ID: <20030813212319.GE19937@eli.net> On (Wed, Aug 13 13:32), Michael G Schwern wrote: > On Wed, Aug 13, 2003 at 12:21:53PM -0700, Joshua Keroes wrote: > > Bonus: we may have a surpise followup lecture. Come and find out! > > Exciting! Who is it? Surprise! It's you, Schwern! :-D -J From dpool at hevanet.com Mon Aug 25 17:07:12 2003 From: dpool at hevanet.com (david pool) Date: Mon Aug 2 21:34:24 2004 Subject: [Pdx-pm] Probably redundant Plug thread Message-ID: <3F4A8890.40305@hevanet.com> Cliff Wells wrote: >>Sounds like Cliff is voting with Carla and me on this one. > > > Absolutely. The more likely scenario, IMO, is that the reduced IT > spending on software would increase the amount available to hire staff. > > I'll concede that switching to OSS might impact jobs in Washington. > However, we are concerned about jobs in *Oregon* and I suggest using > some of the present IT budget to hire *Oregonians* rather than mailing > the check off to Redmond. Well, with the legislative session winding down, any House hearing would have to be within the next day or two. Would you be willing to come testify? I mean Cliff especially, but who else would be available to go cram a hearing in an hour's notice? Of course, if you'd all like to give it a another try now, well, go e-mail your State Reps and Senators. Have your friends do it. What the hell. 
david PS If you have still yet to add your representatives to your e-mail address book, go look it up and then add it. :-D http://www.leg.state.or.us/findlegsltr/findset.htm From jkeroes at eli.net Wed Aug 27 18:54:32 2003 From: jkeroes at eli.net (Joshua Keroes) Date: Mon Aug 2 21:34:24 2004 Subject: [Pdx-pm] Speakers Wanted Message-ID: <20030827235431.GB27278@eli.net> We have nobody to speak at the next meeting. Would you like to? Do you know someone that would? Drop me a line, Joshua