From fulko.hew at gmail.com Wed May 2 08:01:09 2012 From: fulko.hew at gmail.com (Fulko Hew) Date: Wed, 2 May 2012 11:01:09 -0400 Subject: [tpm] XML::Simple ignores SupressEmpty Message-ID: (Note that I've also posted this to PerlMonks.) I've been using XML::Simple for some very simple XML and until yesterday it was working the way I expected... till I got bitten with the following example: use XML::Simple; use Data::Dumper; my $x = <<EOF; a a b EOF $x = XMLin( $x, #SuppressEmpty => '', ForceArray => 1, ContentKey => '-content', ); print Dumper(\$x); $VAR1 = \{ 'obj' => [ { 'class' => 'myclass', 'set' => { 'key2' => {}, 'key1' => { 'content' => 'a' } } }, { 'class' => 'myclass', 'set' => { 'key2' => 'b', 'key1' => 'a' } } ], 'name' => 'me' }; Here I've provided two 'objects'; one object has an element that is empty or whose value is blank. As a result the hash that's returned is formatted differently than when _all_ attributes are provided. What I expected was the latter, like: 'key2' => 'b', 'key1' => 'a' but where there is an 'empty' attribute everything changes to a hash (and with the 'content' sub-element). That's what I thought the ForceArray and ContentKey were supposed to suppress; and the ContentKey does do it!... but only when all fields are populated. I thought SuppressEmpty would do it, but it doesn't. What am I missing? TIA Fulko -------------- next part -------------- An HTML attachment was scrubbed... URL: From fulko.hew at gmail.com Fri May 4 10:45:12 2012 From: fulko.hew at gmail.com (Fulko Hew) Date: Fri, 4 May 2012 13:45:12 -0400 Subject: [tpm] more problems than solutions this week Message-ID: Here's the next problem... How can you tell if a file pointer has been rewound?
The task: ...Follow the contents of a file (and process any new data) The problem: While testing the various permutations of how a file (log files) can get updated you have typical actions like: - append new data onto the end (new stuff) - delete the file and start over (after log file rotation) So one of my test modes was to use shell redirection to write to a file, i.e. echo "stuff" > file What I thought/expected would happen is that the shell would create a new copy of the file and write the string into it. [So when my app detects that the inode changed, it would start reading the new file from the beginning.] But what I've found is that the inode doesn't change! [so I'd assume that the ">" redirection simply rewinds the write pointer to offset zero and writes the string and then truncates the file at that point.] So how can I detect that? One solution I considered was: to remember what the size of the file was (the last time I looked) and if the new size is smaller... it must have been truncated using this technique. That might work for the majority of occurrences, but it doesn't work with the simplest test. echo `date` > file results in a 28 octet long file so I'll remember that it was 28 bytes long for later ... echo `date` > file the inode hasn't changed and neither has the file size, yet there was 'new stuff' that needs processing, that I won't see. ... Suggestions? (Or do I just throw up my hands in disgust and say I can't/don't handle that condition?) Fulko -------------- next part -------------- An HTML attachment was scrubbed... URL: From alexmac131 at hotmail.com Fri May 4 10:47:03 2012 From: alexmac131 at hotmail.com (Alex Mackinnon) Date: Fri, 4 May 2012 17:47:03 +0000 Subject: [tpm] more problems than solutions this week In-Reply-To: References: Message-ID: Have you considered log4perl?
Alex From: fulko.hew at gmail.com Date: Fri, 4 May 2012 13:45:12 -0400 To: toronto-pm at pm.org Subject: [tpm] more problems than solutions this week Here's the next problem... How can you tell if a file pointer has been rewound? The task: ...Follow the contents of a file (and process any new data) The problem: While testing the various permutations of how a file (log files) can get updated you have typical actions like: - append new data onto the end (new stuff) - delete the file and start over (after log file rotation) So one of my test modes was to use shell redirection to write to a file. ie. echo "stuff" > file What I thought/expected would happen is that shell would create a new copy of the file and write the string into it. [So when my app detects that the inode changed, it would start reading the new file from the beginning.] But what I've found is that the inode doesn't change! [so I'd assume that the ">" redirection simply rewinds the write pointer to offset zero and writes the string and then truncates the file at that point.] So how can I detect that? One solution I considered was: to remember what the size of the file was (the last time I looked) and if the new size is smaller... it must have been truncated using this technique. That might work for the majority of occurrances, but it doesn't work with the simplest test. echo `date` > file results in a 28 octet long file so I'll remember that it was 28 bytes long for later ... echo `date` > file the inode hasn't changed and neither has the file size, yet there was 'new stuff that needs processing, that I won't see. ... Suggestions? (Or do I just throw up my hands in disgust and say I can't/don't handle that condition?) Fulko _______________________________________________ toronto-pm mailing list toronto-pm at pm.org http://mail.pm.org/mailman/listinfo/toronto-pm -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From fulko.hew at gmail.com Fri May 4 10:51:48 2012 From: fulko.hew at gmail.com (Fulko Hew) Date: Fri, 4 May 2012 13:51:48 -0400 Subject: [tpm] more problems than solutions this week In-Reply-To: References: Message-ID: On Fri, May 4, 2012 at 1:47 PM, Alex Mackinnon wrote: > Have you considered log4perl? > log4perl is about writing log files, not reading them. -------------- next part -------------- An HTML attachment was scrubbed... URL: From indy at indigostar.com Fri May 4 10:56:59 2012 From: indy at indigostar.com (Indy Singh) Date: Fri, 4 May 2012 13:56:59 -0400 Subject: [tpm] more problems than solutions this week In-Reply-To: References: Message-ID: Wouldn't the timestamp of the file have changed? For this case you could also do a checksum or crc of the file. If the file is huge it may be enough to crc just the first few kilobytes of the file. Indy Singh IndigoSTAR Software -- www.indigostar.com From: Fulko Hew Sent: Friday, May 04, 2012 1:45 PM To: TPM Subject: [tpm] more problems than solutions this week Here's the next problem... How can you tell if a file pointer has been rewound? The task: ...Follow the contents of a file (and process any new data) The problem: While testing the various permutations of how a file (log files) can get updated you have typical actions like: - append new data onto the end (new stuff) - delete the file and start over (after log file rotation) So one of my test modes was to use shell redirection to write to a file, i.e. echo "stuff" > file What I thought/expected would happen is that the shell would create a new copy of the file and write the string into it. [So when my app detects that the inode changed, it would start reading the new file from the beginning.] But what I've found is that the inode doesn't change! [so I'd assume that the ">" redirection simply rewinds the write pointer to offset zero and writes the string and then truncates the file at that point.] So how can I detect that?
One solution I considered was: to remember what the size of the file was (the last time I looked) and if the new size is smaller... it must have been truncated using this technique. That might work for the majority of occurrences, but it doesn't work with the simplest test. echo `date` > file results in a 28 octet long file so I'll remember that it was 28 bytes long for later ... echo `date` > file the inode hasn't changed and neither has the file size, yet there was 'new stuff' that needs processing, that I won't see. ... Suggestions? (Or do I just throw up my hands in disgust and say I can't/don't handle that condition?) Fulko -------------------------------------------------------------------------------- _______________________________________________ toronto-pm mailing list toronto-pm at pm.org http://mail.pm.org/mailman/listinfo/toronto-pm -------------- next part -------------- An HTML attachment was scrubbed... URL: From fulko.hew at gmail.com Fri May 4 11:04:47 2012 From: fulko.hew at gmail.com (Fulko Hew) Date: Fri, 4 May 2012 14:04:47 -0400 Subject: [tpm] more problems than solutions this week In-Reply-To: References: Message-ID: On Fri, May 4, 2012 at 1:56 PM, Indy Singh wrote: > Wouldn't the timestamp of the file have changed? > Yes of course... I should have thought of that! "I can feel my mind slipping..." - HAL > For this case you could also do a checksum or crc of the file. If the > file is huge it may be enough to crc just the first few kilobytes of the > file. > No, it's not practical: either the file is too big, or, if it was just the first few K, there are other ways people could mess with the file that I haven't thought of yet. -------------- next part -------------- An HTML attachment was scrubbed...
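Combining the size check discussed above with Indy's timestamp suggestion gives a workable poll-based detector. A minimal sketch (the field names and return values are illustrative, not from any module):

```perl
use strict;
use warnings;

# Classify what happened to a followed file since the last poll.
# %$old holds the inode/size/mtime values remembered from last time.
sub check_file {
    my ($path, $old) = @_;
    my ($ino, $size, $mtime) = (stat $path)[1, 7, 9];
    return 'missing'   unless defined $ino;
    return 'rotated'   if $ino != $old->{ino};       # new inode: file replaced
    return 'truncated' if $size < $old->{size};      # shrank: rewound via '>'
    return 'rewritten' if $size == $old->{size}
                       && $mtime != $old->{mtime};   # same size, newer mtime
    return $size > $old->{size} ? 'appended' : 'unchanged';
}
```

Note the remaining hole: mtime has one-second granularity on many filesystems, so two same-size rewrites within the same second can still slip through; that is where a checksum of the head of the file would be the fallback.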
URL: From arocker at Vex.Net Fri May 4 11:44:18 2012 From: arocker at Vex.Net (arocker at Vex.Net) Date: Fri, 4 May 2012 14:44:18 -0400 Subject: [tpm] more problems than solutions this week In-Reply-To: References: Message-ID: <11fc642b3e14371cbf0f3df588a6760b.squirrel@mail.vex.net> I see Indy's answered your problem, but just a point on the topic: > But what I've found is that the inode doesn't change! > [so I'd assume that the ">" redirection simply rewinds > the write pointer to offset zero and writes the string > and then truncates the file at that point.] > That's why "sort file > file" (and similar commands) is fatal; it resets the pointer on the (output) file, then starts trying to read from the (input) file, whose pointer now says it's at EOF. From fulko.hew at gmail.com Fri May 4 12:00:43 2012 From: fulko.hew at gmail.com (Fulko Hew) Date: Fri, 4 May 2012 15:00:43 -0400 Subject: [tpm] more problems than solutions this week In-Reply-To: <11fc642b3e14371cbf0f3df588a6760b.squirrel@mail.vex.net> References: <11fc642b3e14371cbf0f3df588a6760b.squirrel@mail.vex.net> Message-ID: On Fri, May 4, 2012 at 2:44 PM, wrote: > > I see Indy's answered your problem, but just a point on the topic: > > > But what I've found is that the inode doesn't change! > > [so I'd assume that the ">" redirection simply rewinds > > the write pointer to offset zero and writes the string > > and then truncates the file at that point.] > > > > That's why "sort file > file" (and similar commands) is fatal; it resets > the pointer on the (output) file, then starts trying to read from the > (input) file, whose pointer now says it's at EOF. > And here I always thought it was because it opens the output file first clobbering the original data before it gets a chance to open the original data. It just goes to show you, you DO learn something new every few decades. :-) -------------- next part -------------- An HTML attachment was scrubbed... 
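The rewind-and-truncate behaviour described above is easy to see from Perl itself: opening an existing file with '>' keeps the inode but empties the file on the spot, before anything gets a chance to read it. A small illustration (file name is arbitrary):

```perl
use strict;
use warnings;
use File::Temp qw(tempdir);

my $dir  = tempdir(CLEANUP => 1);
my $file = "$dir/file";

open my $w, '>', $file or die $!;
print $w "original contents\n";
close $w;

my $ino_before = (stat $file)[1];

# This open is what the shell does for '> file' before it even
# runs the command on the left of the redirection:
open $w, '>', $file or die $!;          # truncates on the spot
my $ino_after = (stat $file)[1];
my $size_now  = (stat $file)[7];
close $w;

print "same inode: ", ($ino_before == $ino_after ? "yes" : "no"), "\n";
print "size after open: $size_now\n";   # 0 -- the old data is already gone
```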
URL: From indy at indigostar.com Fri May 4 12:14:15 2012 From: indy at indigostar.com (Indy Singh) Date: Fri, 4 May 2012 15:14:15 -0400 Subject: [tpm] more problems than solutions this week In-Reply-To: <11fc642b3e14371cbf0f3df588a6760b.squirrel@mail.vex.net> References: <11fc642b3e14371cbf0f3df588a6760b.squirrel@mail.vex.net> Message-ID: <238D314A8F624073B7A50F23D4F96FD9@indy> While we are on this topic: one issue I have encountered is how to update a 'live' file on a webserver. Let's say that we have a large file on a webserver 'foo.tar' and I want to replace it with a new version. If I SFTP upload a replacement file then I am never sure if I am going to mess up someone who may be halfway through downloading the file. I just verified that the inode number does not change if I upload a replacement file with SFTP. Would I be correct in thinking that any file downloads that are occurring concurrently with the upload would be corrupted? Would it be safe to upload the new file as 'new.tar' then use the commands 'rm foo.tar' followed by 'mv new.tar foo.tar'? On the assumption that the rm command would only unlink the directory entry but any current file handles being used to download the file would continue to work. Any new downloads would use the new file. Indy Singh IndigoSTAR Software -- www.indigostar.com -----Original Message----- From: arocker at Vex.Net Sent: Friday, May 04, 2012 2:44 PM To: TPM Subject: Re: [tpm] more problems than solutions this week I see Indy's answered your problem, but just a point on the topic: > But what I've found is that the inode doesn't change! > [so I'd assume that the ">" redirection simply rewinds > the write pointer to offset zero and writes the string > and then truncates the file at that point.] > That's why "sort file > file" (and similar commands) is fatal; it resets the pointer on the (output) file, then starts trying to read from the (input) file, whose pointer now says it's at EOF.
_______________________________________________ toronto-pm mailing list toronto-pm at pm.org http://mail.pm.org/mailman/listinfo/toronto-pm From fulko.hew at gmail.com Fri May 4 12:36:35 2012 From: fulko.hew at gmail.com (Fulko Hew) Date: Fri, 4 May 2012 15:36:35 -0400 Subject: [tpm] more problems than solutions this week In-Reply-To: <238D314A8F624073B7A50F23D4F96FD9@indy> References: <11fc642b3e14371cbf0f3df588a6760b.squirrel@mail.vex.net> <238D314A8F624073B7A50F23D4F96FD9@indy> Message-ID: On Fri, May 4, 2012 at 3:14 PM, Indy Singh wrote: > While we are on this topic: one issue I have encountered is how > to update a 'live' file on a webserver. Let's say that we have a large file > on a webserver 'foo.tar' and I want to replace it with a new version. If I > SFTP upload a replacement file then I am never sure if I am going to mess > up someone who may be halfway through downloading the file. > > I just verified that the inode number does not change if I upload a > replacement file with SFTP. Would I be correct in thinking that any file > downloads that are occurring concurrently with the upload would be > corrupted? > I'd say, potentially YES. It now becomes a race condition as to whether the write pointer ever passes the read pointer. (Odds are that the local reading would complete before the remote rewrite can. But on NFS'ed or other remoted file systems... your mileage may vary.) Would it be safe to upload the new file as 'new.tar' then use the commands > 'rm foo.tar' followed by 'mv new.tar foo.tar'? On the assumption that the > rm command would only unlink the directory entry but any current file > handles being used to download the file would continue to work. Any new > downloads would use the new file. Yup. -------------- next part -------------- An HTML attachment was scrubbed...
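The assumption being confirmed here (an unlinked file stays readable through any handle that was already open) can be demonstrated directly; a small sketch with made-up file names:

```perl
use strict;
use warnings;
use File::Temp qw(tempdir);

my $dir  = tempdir(CLEANUP => 1);
my $path = "$dir/foo.tar";

open my $w, '>', $path or die $!;
print $w "big download\n";
close $w;

open my $r, '<', $path or die $!;   # a download already in progress
unlink $path or die $!;             # 'rm foo.tar': directory entry gone...

my $line = <$r>;                    # ...but the open handle still reads fine
close $r;                           # the inode is freed only now

print $line;
```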
URL: From liam at holoweb.net Fri May 4 13:28:26 2012 From: liam at holoweb.net (Liam R E Quin) Date: Fri, 04 May 2012 16:28:26 -0400 Subject: [tpm] more problems than solutions this week In-Reply-To: References: <11fc642b3e14371cbf0f3df588a6760b.squirrel@mail.vex.net> Message-ID: <1336163306.16952.36.camel@localhost.localdomain> On Fri, 2012-05-04 at 15:00 -0400, Fulko Hew wrote: > On Fri, May 4, 2012 at 2:44 PM, wrote: > And here I always thought it was because it opens the output > file first clobbering the original data before it gets a chance > to open the original data. Well, you're both right :-) When you have sort file > file, first the shell processes the command, opens "file" for writing, truncates it, and associates it with standard output for the subcommand. Then, the shell forks and exec's /bin/sort with standard output pointing to the now-empty file. When sort reads the file, there's nothing there. In V6 Unix, file pointers were global, and for example there was a goto command for the shell that changed the file pointer for the current shell script to point to just after the label, so the next line the shell read was the next line to execute. In V7 Unix (1978 or so) file pointers were per process and this was no longer possible. Liam -- Liam Quin - XML Activity Lead, W3C, http://www.w3.org/People/Quin/ Pictures from old books: http://fromoldbooks.org/ From legrady at gmail.com Fri May 4 14:41:03 2012 From: legrady at gmail.com (Tom Legrady) Date: Fri, 4 May 2012 17:41:03 -0400 Subject: [tpm] more problems than solutions this week In-Reply-To: <238D314A8F624073B7A50F23D4F96FD9@indy> References: <11fc642b3e14371cbf0f3df588a6760b.squirrel@mail.vex.net> <238D314A8F624073B7A50F23D4F96FD9@indy> Message-ID: On Unix, if you "rm file1" or "mv file2 file1", file1 goes away, but if someone has it open, they maintain a link to an "invisible file". The file only REALLY goes away when they close their filehandle.
On Windows, my understanding is that you can't delete files that are open, but I don't deal with Windows. Tom On Fri, May 4, 2012 at 3:14 PM, Indy Singh wrote: > While we are on this topic: one issue I have encountered is how > to update a 'live' file on a webserver. Let's say that we have a large file > on a webserver 'foo.tar' and I want to replace it with a new version. If I > SFTP upload a replacement file then I am never sure if I am going to mess > up someone who may be halfway through downloading the file. > > I just verified that the inode number does not change if I upload a > replacement file with SFTP. Would I be correct in thinking that any file > downloads that are occurring concurrently with the upload would be > corrupted? > > Would it be safe to upload the new file as 'new.tar' then use the commands > 'rm foo.tar' followed by 'mv new.tar foo.tar'? On the assumption that the > rm command would only unlink the directory entry but any current file > handles being used to download the file would continue to work. Any new > downloads would use the new file. > > > > Indy Singh > IndigoSTAR Software -- www.indigostar.com > -----Original Message----- From: arocker at Vex.Net > Sent: Friday, May 04, 2012 2:44 PM > To: TPM > Subject: Re: [tpm] more problems than solutions this week > > > > I see Indy's answered your problem, but just a point on the topic: > > But what I've found is that the inode doesn't change! >> [so I'd assume that the ">" redirection simply rewinds >> the write pointer to offset zero and writes the string >> and then truncates the file at that point.] >> >> > That's why "sort file > file" (and similar commands) is fatal; it resets > the pointer on the (output) file, then starts trying to read from the > (input) file, whose pointer now says it's at EOF.
> > ______________________________**_________________ > toronto-pm mailing list > toronto-pm at pm.org > http://mail.pm.org/mailman/**listinfo/toronto-pm > ______________________________**_________________ > toronto-pm mailing list > toronto-pm at pm.org > http://mail.pm.org/mailman/**listinfo/toronto-pm > -------------- next part -------------- An HTML attachment was scrubbed... URL: From arocker at Vex.Net Mon May 7 12:50:44 2012 From: arocker at Vex.Net (arocker at Vex.Net) Date: Mon, 7 May 2012 15:50:44 -0400 Subject: [tpm] Misdirected effort Message-ID: An article in May's "Linux Journal" mentions a site called "Software Carpentry" http://software-carpentry.org/about/ninety-second-pitch/ which aims to teach scientists how to program. A noble aim, but they spoil it by picking Python as the language of instruction. How can we show them the error of their ways, apart from referencing the role of Perl in bioinformatics? From abram.hindle at softwareprocess.es Mon May 7 13:05:40 2012 From: abram.hindle at softwareprocess.es (Abram Hindle) Date: Mon, 07 May 2012 14:05:40 -0600 Subject: [tpm] Misdirected effort In-Reply-To: (sfid-20120507_135602_198679_0AE8475B) References: (sfid-20120507_135602_198679_0AE8475B) Message-ID: <4FA82B14.1070209@softwareprocess.es> I doubt you're going to make them change anything, but I bet that Greg Wilson would not be against volunteers porting examples to Perl. My point: you're not going to change anything unless you do the necessary volunteer work. abram On 12-05-07 01:50 PM, arocker at Vex.Net wrote: > An article in May's "Linux Journal" mentions a site called "Software > Carpentry" http://software-carpentry.org/about/ninety-second-pitch/ which > aims to teach scientists how to program. > > A noble aim, but they spoil it by picking Python as the language of > instruction. How can we show them the error of their ways, apart from > referencing the role of Perl in bioinformatics? 
> > _______________________________________________ > toronto-pm mailing list > toronto-pm at pm.org > http://mail.pm.org/mailman/listinfo/toronto-pm From ait at p2ee.org Mon May 7 13:09:16 2012 From: ait at p2ee.org (Alejandro Imass) Date: Mon, 7 May 2012 16:09:16 -0400 Subject: [tpm] Misdirected effort In-Reply-To: References: Message-ID: On Mon, May 7, 2012 at 3:50 PM, wrote: > > An article in May's "Linux Journal" mentions a site called "Software > Carpentry" http://software-carpentry.org/about/ninety-second-pitch/ which > aims to teach scientists how to program. > > A noble aim, but they spoil it by picking Python as the language of > instruction. How can we show them the error of their ways, apart from > referencing the role of Perl in bioinformatics? > Python and Java are languages designed to create software with mediocre programmers. Note that I'm not using mediocre in a pejorative sense but in the strict sense of the word. They were designed so that there is one way to do it and little room for creativity, to make it possible to produce relatively good software with mediocre staff. They have become mainstream/industrial languages because they satisfy that particular need. They are not bad languages at all, they're just boring and square. Perl and many other non-mainstream languages, on the other hand, are designed by hackers, linguists and higher-order programmers to make things interesting and fun, and more optimal in many cases. They offer more freedom and provide a much wider base of tools, but on the other hand require a lot more knowledge, and the freedom of TIMTOWTDI comes at a price. Perl is interesting, however, because it truly satisfies the needs of the very beginner, whilst at the same time offering things for higher-order programmers, and easy interfacing to C, which I personally find most valuable. With Perl you can program in several paradigms at the same time: functional, simple OO, complete OO (Moose), procedural, event-based.
With Perl you can adapt your programming style/paradigm to tackle problems differently and I think this is most valuable for scientists. I mean, Perl will satisfy the easy-learning traits of Python, but will also satisfy the needs of complex problem-solving using different techniques and approaches, hardly ever found in a single language. An argument along those lines might make sense for the scientific crowd, but there are surely a lot more, like the sheer amount of science-related modules on the CPAN. Look on PerlMonks, as there have been several discussions on promoting Perl. -- Alejandro Imass From legrady at gmail.com Mon May 7 17:40:18 2012 From: legrady at gmail.com (Tom Legrady) Date: Mon, 7 May 2012 20:40:18 -0400 Subject: [tpm] Misdirected effort In-Reply-To: References: Message-ID: I found the software carpentry site a couple of years ago and read through the whole thing. I found it very interesting. The focus is on scientists who are presumed to be intelligent and have good math backgrounds, but never learned to write good software. I remember a discussion of the idea that data files should store their provenance, to use the art world term: store a history of where it originated and when, and of every transformation it has undergone, including version numbers for the software. When there was the scandal with the stolen emails from the UK university, there was a problem because data presented in papers could not be verified; either the original data was gone, or some or all of the programs used to process it were no longer usable. On Mon, May 7, 2012 at 3:50 PM, wrote: > > An article in May's "Linux Journal" mentions a site called "Software > Carpentry" http://software-carpentry.org/about/ninety-second-pitch/ which > aims to teach scientists how to program. > > A noble aim, but they spoil it by picking Python as the language of > instruction. How can we show them the error of their ways, apart from > referencing the role of Perl in bioinformatics?
> > _______________________________________________ toronto-pm mailing list > toronto-pm at pm.org > http://mail.pm.org/mailman/listinfo/toronto-pm > -------------- next part -------------- An HTML attachment was scrubbed... URL: From quantum.mechanic.1964 at gmail.com Tue May 8 06:47:07 2012 From: quantum.mechanic.1964 at gmail.com (Quantum Mechanic) Date: Tue, 8 May 2012 09:47:07 -0400 Subject: [tpm] How to shoot yourself with Google and Amazon Message-ID: http://www.behind-the-enemy-lines.com/2012/04/google-attack-how-i-self-attacked.html Cheers, Shaun ==> auoyd! ^w woJ3 +uaS <== From antoniosun at lavabit.com Tue May 8 08:21:29 2012 From: antoniosun at lavabit.com (Antonio Sun) Date: Tue, 8 May 2012 11:21:29 -0400 Subject: [tpm] Manipulating utf8 strings with Perl Message-ID: Hi, I want to do (one-line) replacement with Perl on utf8 strings. Here is the hex dump of what exactly the utf8 strings look like: cat myfile.utf8 | od -t x1 | head -3 0000000 3c 00 73 00 3a 00 45 00 6e 00 76 00 65 00 6c 00 0000020 6f 00 70 00 65 00 20 00 78 00 6d 00 6c 00 6e 00 0000040 73 00 3a 00 73 00 3d 00 22 00 68 00 74 00 74 00 I.e., each character takes 2 bytes. I don't know how to strip the high byte (please help), but this is what the strings actually are: . . . I just want to do some string manipulations with one-line Perl, e.g.: cat myfile.utf8 | perl -p000e 's///' How can I do that? Thanks -------------- next part -------------- An HTML attachment was scrubbed... URL: From peter at vereshagin.org Tue May 8 09:19:18 2012 From: peter at vereshagin.org (Peter Vereshagin) Date: Tue, 8 May 2012 20:19:18 +0400 Subject: [tpm] Manipulating utf8 strings with Perl In-Reply-To: References: Message-ID: <20120508161918.GB12053@external.screwed.box> Hello. 2012/05/08 11:21:29 -0400 Antonio Sun => To TPM Toronto Perl Mongers : AS> I.e., each character takes 2 bytes.
In Perl, the 'character' can take both more and less AS> I don't know how to strip the high byte (please help), but this is what You may want to work with bytes or characters, either of those, but the choice must be made depending on your needs. -- Peter Vereshagin (http://vereshagin.org) pgp: A0E26627 From vinny at usestrict.net Tue May 8 12:27:38 2012 From: vinny at usestrict.net (Vinny Alves) Date: Tue, 8 May 2012 15:27:38 -0400 Subject: [tpm] Manipulating utf8 strings with Perl In-Reply-To: References: Message-ID: Do you absolutely need to work on the hex strings? If not, using binmode should make your one-liner read the input as UTF-8. perl -e 'binmode(STDIN,":encoding(UTF-8)"); while(<>){ s/// }' < myfile.utf8 Vinny http://cronblocks.com On Tue, May 8, 2012 at 11:21 AM, Antonio Sun wrote: > Hi, > > I want to do (one-line) replacement with Perl on utf8 strings. > > Here is the hex dump of what exactly the utf8 strings look like: > > cat myfile.utf8 | od -t x1 | head -3 > 0000000 3c 00 73 00 3a 00 45 00 6e 00 76 00 65 00 6c 00 > 0000020 6f 00 70 00 65 00 20 00 78 00 6d 00 6c 00 6e 00 > 0000040 73 00 3a 00 73 00 3d 00 22 00 68 00 74 00 74 00 > > I.e., each character takes 2 bytes. > > I don't know how to strip the high byte (please help), but this is what > the strings actually are: > > > > . . . > > I just want to do some string manipulations with one-line Perl, e.g.: > > cat myfile.utf8 | perl -p000e 's///' > > How can I do that? > > Thanks > > > _______________________________________________ > toronto-pm mailing list > toronto-pm at pm.org > http://mail.pm.org/mailman/listinfo/toronto-pm > > -------------- next part -------------- An HTML attachment was scrubbed...
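The dump earlier in the thread shows the low byte first ('3c 00' for '<'), i.e. UTF-16LE rather than UTF-8, which is where the plain one-liner runs into trouble. A sketch of the decode/substitute/re-encode round trip; the 's:Envelope' pattern is the one mentioned later in the thread, and the replacement text 's:Env' is made up for illustration:

```perl
use strict;
use warnings;
use Encode qw(decode encode);

# Decode UTF-16LE bytes to Perl characters, substitute, re-encode.
sub edit_utf16le {
    my ($bytes) = @_;
    my $text = decode('UTF-16LE', $bytes);
    $text =~ s/s:Envelope/s:Env/g;      # illustrative substitution
    return encode('UTF-16LE', $text);
}
```

The same idea as a one-liner would be `perl -pe 'BEGIN { binmode STDIN, ":encoding(UTF-16LE)"; binmode STDOUT, ":encoding(UTF-16LE)" } s/s:Envelope/s:Env/g' < myfile.utf8 > myfile.out`; the explicit LE matters because the dumped data carries no BOM.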
URL: From liam at holoweb.net Tue May 8 12:36:42 2012 From: liam at holoweb.net (Liam R E Quin) Date: Tue, 08 May 2012 21:36:42 +0200 Subject: [tpm] Manipulating utf8 strings with Perl In-Reply-To: References: Message-ID: <1336505802.22486.59.camel@localhost.localdomain> On Tue, 2012-05-08 at 11:21 -0400, Antonio Sun wrote: > cat myfile.utf8 | od -t x1 | head -3 > 0000000 3c 00 73 00 3a 00 45 00 6e 00 76 00 65 00 6c 00 > 0000020 6f 00 70 00 65 00 20 00 78 00 6d 00 6c 00 6e 00 > 0000040 73 00 3a 00 73 00 3d 00 22 00 68 00 74 00 74 00 > > I.e., each character takes 2 bytes. That's not utf8 - utf8 doesn't have nul bytes in it unless you put them there. It's utf-16. Use iconv. Liam -- Liam Quin - XML Activity Lead, W3C, http://www.w3.org/People/Quin/ Pictures from old books: http://fromoldbooks.org/ Ankh: irc.sorcery.net irc.gnome.org www.advogato.org From antoniosun at lavabit.com Tue May 8 13:27:25 2012 From: antoniosun at lavabit.com (Antonio Sun) Date: Tue, 8 May 2012 16:27:25 -0400 Subject: [tpm] Manipulating utf8 strings with Perl In-Reply-To: References: Message-ID: Thanks everyone for your replies. Yeah, I realized that the file is utf-16 afterward. On Tue, May 8, 2012 at 3:27 PM, Vinny Alves wrote: > perl -e 'binmode(STDIN,":encoding(UTF-8)"); while(<>){ *s/**/* > .>*/** *}' < myfile.utf8 > Thanks, that's exactly what I was looking for. However, I tried the following and it doesn't work. $ perl -Mutf8 -pe 'binmode(STDIN,":encoding(UTF-16)"); s//' < myfile.utf8 > myfile.out $ cmp myfile.utf8 myfile.out && echo same same Seems that the replace strings ('s:Envelope...') are not handled as wide chars in Perl. -------------- next part -------------- An HTML attachment was scrubbed... URL: From vinny at usestrict.net Tue May 8 13:38:03 2012 From: vinny at usestrict.net (Vinny Alves) Date: Tue, 8 May 2012 16:38:03 -0400 Subject: [tpm] Manipulating utf8 strings with Perl In-Reply-To: References: Message-ID: Remove the -Mutf8 and it'll work with a UTF-8 file. 
I had some issues converting a test file to UTF-16. Vinny http://cronblocks.com On Tue, May 8, 2012 at 4:27 PM, Antonio Sun wrote: > Thanks everyone for your replies. > > Yeah, I realized that the file is utf-16 afterward. > > On Tue, May 8, 2012 at 3:27 PM, Vinny Alves wrote: > >> perl -e 'binmode(STDIN,":encoding(UTF-8)"); while(<>){ *s/**/*> >> .>*/** *}' < myfile.utf8 >> > > Thanks, that's exactly what I was looking for. > However, I tried the following and it doesn't work. > > $ perl -Mutf8 -pe 'binmode(STDIN,":encoding(UTF-16)"); > s//' < myfile.utf8 > myfile.out > > $ cmp myfile.utf8 myfile.out && echo same > same > > Seems that the replace strings ('s:Envelope...') are not handled as wide > chars in Perl. > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From antoniosun at lavabit.com Tue May 8 14:56:19 2012 From: antoniosun at lavabit.com (Antonio Sun) Date: Tue, 8 May 2012 17:56:19 -0400 Subject: [tpm] Manipulating utf8 strings with Perl In-Reply-To: References: Message-ID: On Tue, May 8, 2012 at 4:38 PM, Vinny Alves wrote: > Remove the -Mutf8 and it'll work with a UTF-8 file. > I didn't have that -Mutf8 at first, only added after on seeing that it didn't work. I had some issues converting a test file to UTF-16. > Yeah, I'm facing the same problem as well. You can get a UTF-16 sample from my hex dump. Thanks -------------- next part -------------- An HTML attachment was scrubbed... URL: From fulko.hew at gmail.com Thu May 10 08:59:16 2012 From: fulko.hew at gmail.com (Fulko Hew) Date: Thu, 10 May 2012 11:59:16 -0400 Subject: [tpm] splitting filespecs Message-ID: I'm looking for a good library to split a filespec into directory and filename components; so I can: a) add a default dirspec if one wasn't provided b) add a default filename if one wasn't provided c) create the required directory path if it didn't already exist So far I've tried File::Basename and File::Spec Unfortunately they both mis-behave? 
File::Basename always provides a dirspec even if the original string didn't have one (so I can't tell if I need to insert my default one...) and File::Spec (as far as I'm concerned) incorrectly splits '/.' into the directory of '/.' with no file name. I would have expected the directory to be '/' and the filename to be '.'. Am I wrong? Is there a better library? [I suppose I could just take the filename from File::Spec and manually extract everything in front of it from the original string as the directory and simply _ignore_ what the module tells me.] ... and yes, I will eventually also need to parse DOS filespecs. -------------- next part -------------- An HTML attachment was scrubbed... URL: From stuart at morungos.com Thu May 10 09:18:44 2012 From: stuart at morungos.com (Stuart Watt) Date: Thu, 10 May 2012 12:18:44 -0400 Subject: [tpm] splitting filespecs In-Reply-To: References: Message-ID: I've used File::Spec reliably for this, but it's right to claim that "." in "/." isn't a filename. It isn't really. I believe it is a special case used by UNIX to inform it that the thing specified is to be interpreted as a directory. So, for example, "/usr/x" could be a *file* named "x", but "/usr/x/." is specifically a directory. It's useful when it could be either and when it doesn't yet exist. This special case logic is built into File::Spec. You should *always* be able to split things with File::Spec->splitpath/splitdir and rejoin with File::Spec->catpath/catdir, and I've always found it works well for both UNIX and for Windows. All the best Stuart On 2012-05-10, at 11:59 AM, Fulko Hew wrote: > I'm looking for a good library to split a filespec into directory > and filename components; so I can: > > a) add a default dirspec if one wasn't provided > b) add a default filename if one wasn't provided > c) create the required directory path if it didn't already exist > > So far I've tried > > File::Basename and File::Spec > > Unfortunately they both mis-behave?
> > File::Basename always provides a dirspec even if the original string didn't have one > (so I can't tell if I need to insert my default one...) > > and File::Spec (as far as I'm concerned) incorrectly splits '/.' into > the directory of '/.' with no file name. I would have expected the > directory to be '/' and the filename to be '.'. > > Am I wrong? > Is there a better library? > > [I suppose I could just take the filename from File::Spec and manually > extract everything in front of it from the original string as the directory > and simply _ignore_ what the module tells me.] > > ... and yes, I will eventually also need to parse DOS filespecs. > > > > > _______________________________________________ > toronto-pm mailing list > toronto-pm at pm.org > http://mail.pm.org/mailman/listinfo/toronto-pm -------------- next part -------------- An HTML attachment was scrubbed... URL: From rob at cryptic.org Thu May 10 10:30:43 2012 From: rob at cryptic.org (rob at cryptic.org) Date: Thu, 10 May 2012 13:30:43 -0400 Subject: [tpm] splitting filespecs In-Reply-To: References: Message-ID: <4f5179f87dd182533444beb700cfaad9@cryptic.org> Further to that, "." actually is a special-case directory, similar to "..". Like a double period is "parent directory", a single period is "current directory". It's like this on both Unix and Windows (and presumably Mac). On Thu, 10 May 2012 12:18:44 -0400, Stuart Watt wrote: > I've used File::Spec reliably for this, but it's right to claim that > "." in "/." isn't a filename. It isn't really. I believe it is a > special case used by UNIX to inform it that the thing specified is to > be interpreted as a directory. So, for example, "/usr/x" could be a > *file* named "x" (, but "/usr/x/." is specifically a directory. It's > useful when it could be either and when it doesn't yet exist. This > special case logic is built into File::Spec. 
> > You should *always* be able to split things with > File::Spec->splitpath/splitdir and rejoin with > File::Spec->catpath/catdir, and I've always found it works well for > both UNIX and for Windows. > > All the best > Stuart > > On 2012-05-10, at 11:59 AM, Fulko Hew wrote: > >> I'm looking for a good library to split a filespec into directory >> and filename components; so I can: >> >> a) add a default dirspec if one wasn't provided >> b) add a default filename if one wasn't provided >> c) create the required directory path if it didn't already exist >> >> So far I've tried >> >> File::Basename and File::Spec >> >> Unfortunately they both mis-behave? >> >> File::Basename always provides a dirspec even if the original string >> didn't have one >> (so I can't tell if I need to insert my default one...) >> >> and File::Spec (as far as I'm concerned) incorrectly splits '/.' >> into >> the directory of '/.' with no file name. I would have expected the >> directory to be '/' and the filename to be '.'. >> >> Am I wrong? >> Is there a better library? >> >> [I suppose I could just take the filename from File::Spec and >> manually >> extract everything in front of it from the original string as the >> directory >> and simply _ignore_ what the module tells me.] >> >> ... and yes, I will eventually also need to parse DOS filespecs. 
>> >> _______________________________________________ >> toronto-pm mailing list >> toronto-pm at pm.org [1] >> http://mail.pm.org/mailman/listinfo/toronto-pm > > > > Links: > ------ > [1] mailto:toronto-pm at pm.org From mattp at cpan.org Thu May 10 10:39:03 2012 From: mattp at cpan.org (Matthew Phillips) Date: Thu, 10 May 2012 13:39:03 -0400 Subject: [tpm] ideas for talks In-Reply-To: <18336CEB-DC16-4E5F-A46B-E699AA3E030F@vilerichard.com> References: <18336CEB-DC16-4E5F-A46B-E699AA3E030F@vilerichard.com> Message-ID: Hi All, As an addendum to Olaf's metacpan talk I will try to put some slides together on Carton & CPANFile; why it's good, why you'd use it etc. Cheers, Matt On Wed, Mar 21, 2012 at 1:43 PM, Olaf Alders wrote: > I just wanted to throw out some ideas for talks over the coming months. > > First off, my YAPC talk has been accepted: > http://act.yapcna.org/2012/talk/139 It's a 20 minute introduction to the > MetaCPAN API. The meeting in May would be a great time for me to test > drive the talk. April works too, if May is booked solid. ;) > > Secondly, if anyone knows their way around tmux, I personally would > benefit from some kind of demonstration on that. (No, I did not RTFM) > > Thirdly, in general some kind of a tools night would be great. I'm > finding out about really great stuff just by looking over someone's > shoulder. A lot of times they tell me they've been using it for years, but > they never thought to share it with me. So, getting people to talk about > how they use vim, solarized, screen, tmux, komodo, etc and demonstrating > that on the projector would be something very helpful, I think. 
> > Olaf > -- > Olaf Alders > olaf at vilerichard.com > > http://vilerichard.com -- folk rock > http://twitter.com/vilerichard > http://cdbaby.com/cd/vilerichard > > > > > _______________________________________________ > toronto-pm mailing list > toronto-pm at pm.org > http://mail.pm.org/mailman/listinfo/toronto-pm > -------------- next part -------------- An HTML attachment was scrubbed... URL: From fulko.hew at gmail.com Thu May 10 10:57:22 2012 From: fulko.hew at gmail.com (Fulko Hew) Date: Thu, 10 May 2012 13:57:22 -0400 Subject: [tpm] splitting filespecs In-Reply-To: <4f5179f87dd182533444beb700cfaad9@cryptic.org> References: <4f5179f87dd182533444beb700cfaad9@cryptic.org> Message-ID: On Thu, May 10, 2012 at 1:30 PM, wrote: > Further to that, "." actually is a special-case directory, similar to > "..". Like a double period is "parent directory", a single period is > "current directory". It's like this on both Unix and Windows (and > presumably Mac). Actually it's both a special-case filename and a directory. My trouble is that as far as File::Spec is concerned '.' and '..' are returned as the filename ... which is technically correct, BUT if that's all that was provided, it really means 'this' or 'the parent' directory [not a filename] As a result, I'd think the directory wasn't provided and substitute my own, and use the filename provided and end up with "/defdir/." ... which would be _completely_ wrong. If someone gave me '.' as their 'target' you'd want to interpret that as "they want it to be in the 'default' file in _this current_ directory" ie. ./deffile So in either case... these libraries still don't do it correctly (for my needs) I've ended up using a combo of special case tests for '.' and '..'
[meaning this (or parent directory) with no filename provided]; otherwise use File::Basename to extract the filename and extract any stuff in front of the filename as the directory path (and ignore what File::Basename provides) and then finally apply my default path (if the prefix/dirpath was empty) and default filename (if filename was empty) and then splice them back together again using File::Spec::catfile() -------------- next part -------------- An HTML attachment was scrubbed... URL: From stuart at morungos.com Thu May 10 11:28:33 2012 From: stuart at morungos.com (Stuart Watt) Date: Thu, 10 May 2012 14:28:33 -0400 Subject: [tpm] splitting filespecs In-Reply-To: References: <4f5179f87dd182533444beb700cfaad9@cryptic.org> Message-ID: OK, I see the issue, but you still can (should?) handle this with File::Spec. If you assume that these special cases (and there are only two) are always to be interpreted as a directory (I think that's really the problem), then something like the following will do it: my ($vol, $dir, $file) = File::Spec->splitpath($value, ($value eq File::Spec->updir() or $value eq File::Spec->curdir())) This will split forcing no file => 1 for . and .., and not for anything else. You could then maybe merge as before. This is also Windows-safe, and is actually logically correct for all platforms, as you are merely saying that if the string passed matches the exact string for the current directory or the parent directory, assume it is a directory. I used to use Lisp which had proper rules for merging pathnames that encapsulated some of this, but I just checked and as far as I can see, it also gets it wrong in these cases. I actually believe there is a case for a CPAN module which embeds this merging logic, i.e., takes two filenames and merges them, *not* a rel2abs, but a fill-in-the-blanks, which is very subtly different, and extremely handy for defaulting. The Lisp version also did file types, and this was also very handy. 
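As a concrete sketch of that fill-in-the-blanks defaulting (the default values and variable names below are mine, not an existing API):

```shell
perl -MFile::Spec -e '
    my ($value, $def_dir, $def_file) = @ARGV;
    # Force "no file" for the two special names, as discussed above.
    my $no_file = ($value eq File::Spec->curdir()
                or $value eq File::Spec->updir());
    my ($vol, $dir, $file) = File::Spec->splitpath($value, $no_file);
    $dir  = $def_dir  if $dir  eq "";   # fill in a default directory
    $file = $def_file if $file eq "";   # fill in a default filename
    print File::Spec->catpath($vol, $dir, $file), "\n";
' . /defdir deffile    # prints ./deffile
```

Called with a bare filename instead (x /defdir deffile) it prints /defdir/x, which is the defaulting behaviour Fulko was after.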
--S On 2012-05-10, at 1:57 PM, Fulko Hew wrote: > > > On Thu, May 10, 2012 at 1:30 PM, wrote: > Further to that, "." actually is a special-case directory, similar to "..". Like a double period is "parent directory", a single period is "current directory". It's like this on both Unix and Windows (and presumably Mac). > > Actually its both a special case filename and a directory. > My trouble is that as far as File::Spec is concerned '.' and '..' are returned as the filename > ... which is technically correct, BUT if thats all that was provided, > it really means 'this' or 'the parent' directory [not a filename] > > As a result, I'd think the directory wasn't provided and substitute my own, and use the > filename provided and end up with "/defdir/." > ... which would be _completely_ wrong. > > If someone gave me '.' as their 'target' you'd want to interpret that as > "they want it to be in the 'default' file in _this current_ directory" ie. ./deffile > > > So in either case... these libraries still don't do it correctly (for my needs) > > I've ended up using a combo of special case tests for '.' and '..' > [meaning this (or parent directory) with no filename provided]; > otherwise use File::Basename to extract the filename and > extract any stuff in front of the filename as the directory path > (and ignore what File::Basename provides) > and then finally apply my default path (if the prefix/dirpath was empty) > and default filename (if filename was empty) > and then splice them back together again using File::Spec::catfile() > > > > > _______________________________________________ > toronto-pm mailing list > toronto-pm at pm.org > http://mail.pm.org/mailman/listinfo/toronto-pm -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From antoniosun at lavabit.com Sat May 12 12:16:08 2012 From: antoniosun at lavabit.com (Antonio Sun) Date: Sat, 12 May 2012 15:16:08 -0400 Subject: [tpm] Perl Template 101 Message-ID: Hi, It's been a while since I last played with Perl Templates. I know it is very easy, but I'm still wondering if you know some good intro/tutorial that can help me to pick it up easily, considering there is now zero knowledge remaining in me about it. Also, when talking about template-ing, we can't avoid talking about the data sources. What would be the lightest module to use? Most probably I need only one column from the data source, in which case I don't need any package at all. But it'd be good to know, just in case that I need to read two or more columns. Thanks -------------- next part -------------- An HTML attachment was scrubbed... URL: From ait at p2ee.org Sun May 13 08:48:40 2012 From: ait at p2ee.org (Alejandro Imass) Date: Sun, 13 May 2012 11:48:40 -0400 Subject: [tpm] Perl Template 101 In-Reply-To: References: Message-ID: On Sat, May 12, 2012 at 3:16 PM, Antonio Sun wrote: > Hi, > > It's been a while since I last played with Perl Templates. I know it is very > easy, but I'm still wondering if you know some good intro/tutorial that can > help me to pick it up easily, considering there is now zero knowledge > remaining in me about it. > > Also, when talking about template-ing, we can't avoid talking about the data > sources. What would be the lightest module to use? Most probably I need only TT2 is definitely the way to go for very powerful templates. The documentation is simply fantastic, and you probably won't need anything else but: perldoc Template::Manual perldoc Template::Manual::Intro http://template-toolkit.org/ Regarding DB access combined with a good ORM, I would suggest using DBIC (DBIx::Class).
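For a first taste of the syntax before diving into the manual, TT2 can even be driven from a one-liner (a sketch; assumes the Template distribution is installed):

```shell
# process() accepts a scalar ref as the template source plus a hashref of
# template variables; output goes to STDOUT by default.
perl -MTemplate -e 'Template->new->process(\"Hello [% name %]!\n", { name => "TPM" })
                        or die Template->error'
# prints Hello TPM!
```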
But if you are going to use DBIC and TT2, you might as well add a controller layer and use Catalyst: http://www.catalystframework.org/ > one column from the data source, in which case I don't need any package at > all. But it'd be good to know, just in case that I need to read two or more > columns. > Catalyst helpers will help you create a CRUD app in about 7 minutes ;-) Best, -- Alejandro Imass From antoniosun at lavabit.com Mon May 14 09:06:40 2012 From: antoniosun at lavabit.com (Antonio Sun) Date: Mon, 14 May 2012 12:06:40 -0400 Subject: [tpm] Pipe expression result through external filter command Message-ID: Hi, My mind is blocked, and I need your help. I need to use an external filter command to further process my Perl expression calculation, all within a Perl expression. How can I do that? Let me illustrate with an example: I need to calculate 12+34, then pipe this expression result through an external shell filter command, say 'rev'. Ultimately, I'm going to use the whole expression in the second part of a string-replace 's/.../.../eg' operation. Please help. Thanks -------------- next part -------------- An HTML attachment was scrubbed... URL: From antoniosun at lavabit.com Mon May 14 10:29:47 2012 From: antoniosun at lavabit.com (Antonio Sun) Date: Mon, 14 May 2012 13:29:47 -0400 Subject: [tpm] Pipe expression result through external filter command In-Reply-To: References: Message-ID: On Mon, May 14, 2012 at 12:06 PM, Antonio Sun wrote: > I need to calculate 12+34, then pipe this expression result through an > external shell filter command, say 'rev'. > > Ultimately, I'm going to use the whole expression in the second part of > a string-replace 's/.../.../eg' operation. > Maybe I have to do it in two steps: perl -e '$str = "1234"; $str =~ s/1234/$var = $&; `echo $var | rev`/e; print $str' Thanks
URL: From shlomif at shlomifish.org Mon May 14 12:13:06 2012 From: shlomif at shlomifish.org (Shlomi Fish) Date: Mon, 14 May 2012 22:13:06 +0300 Subject: [tpm] Pipe expression result through external filter command In-Reply-To: References: Message-ID: <20120514221306.5c00c706@lap.shlomifish.org> Hi Antonio, a few comments on your code. On Mon, 14 May 2012 13:29:47 -0400 Antonio Sun wrote: > On Mon, May 14, 2012 at 12:06 PM, Antonio Sun wrote: > > > I need to calculate 12+34, then pipe this expression result through an > > external shell filter command, say 'rev'. > > > > ultimately, I'm going to use the whole expression in the second part of > > string replace 's/.../.../eg' operation. > > > > Maybe I have to do it in two steps: > > perl -e '$str = "1234"; $str =~ s/1234/$var = $&; `echo $var | rev`/e; > print $str' > 1. Using $& will incur a slowdown on the rest of the matches done in the program. You should instead do: s/(1234)/$var = $1; [rest of expression here]/ 2. Look into https://metacpan.org/module/IPC::System::Simple and https://metacpan.org/release/IPC-Run instead of using backticks ( `...` ). If you are going to use backticks, make sure you quote the variables you pass this way (which varies from shell to shell). 3. I hope the external command in question is harder to do properly in Perl than the UNIX "rev" command. You may also wish to search CPAN for a module doing something similar. For more information, see: http://perl-begin.org/tutorials/bad-elements/ Regards, Shlomi Fish -- ----------------------------------------------------------------- Shlomi Fish http://www.shlomifish.org/ List of Portability Libraries - http://shlom.in/port-libs There is an IGLU Cabal, but its only purpose is to deny the existence of an IGLU Cabal. ? Martha Greenberg Please reply to list if it's a mailing list post - http://shlom.in/reply . 
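P.S. Putting points 1 and 3 together for the concrete 12+34-through-rev example: since rev is trivial in Perl, the whole thing collapses to one s///e with no shell involved (a sketch):

```shell
# $1/$2 capture the operands (avoiding the $& penalty); the /e body
# computes the sum and string-reverses it in pure Perl instead of
# shelling out to rev.
perl -e '$str = "12+34"; $str =~ s/(\d+)\+(\d+)/scalar reverse($1 + $2)/e; print $str, "\n"'
# prints 64  (12+34 = 46, reversed)
```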
From antoniosun at lavabit.com Mon May 14 13:42:20 2012 From: antoniosun at lavabit.com (Antonio Sun) Date: Mon, 14 May 2012 16:42:20 -0400 Subject: [tpm] Pipe expression result through external filter command In-Reply-To: <20120514221306.5c00c706@lap.shlomifish.org> References: <20120514221306.5c00c706@lap.shlomifish.org> Message-ID: Thanks a lot! On Mon, May 14, 2012 at 3:13 PM, Shlomi Fish wrote: > a few comments on your code. > -------------- next part -------------- An HTML attachment was scrubbed... URL: From olaf at vilerichard.com Tue May 15 08:36:32 2012 From: olaf at vilerichard.com (Olaf Alders) Date: Tue, 15 May 2012 11:36:32 -0400 Subject: [tpm] Perl Template 101 In-Reply-To: References: Message-ID: <1CC62361-2DBF-402A-B6C2-8AD780005F75@vilerichard.com> On 2012-05-13, at 11:48 AM, Alejandro Imass wrote: > On Sat, May 12, 2012 at 3:16 PM, Antonio Sun wrote> Hi, >> >> It's been a while since I last played with Perl Templates. I know it is very >> easy, but I'm still wondering if you know some good intro/tutorial that can >> help me to pick it up easily, considering there are now zero knowledge >> remain in me about it. >> >> Also, when talking about template-ing, we can't avoid talking about the data >> sources. What would be the lightest module to use? Most probably I need only > > TT2 is definitively the way to go for very powerful templates. The > documentation is simply fantastic, and you probably won't need > anything else but: > > perldoc Template::Manual > perldoc Template::Manual::Intro > http://template-toolkit.org/ I really do like TT as well. Just thought I'd also point out that Text::XSlate also seems to be a popular choice: https://metacpan.org/module/Text::Xslate That's likely the next templating system I'll play with. 
Olaf -- Olaf Alders olaf at vilerichard.com http://vilerichard.com -- folk rock http://twitter.com/vilerichard http://cdbaby.com/cd/vilerichard From dave.s.doyle at gmail.com Tue May 15 10:14:55 2012 From: dave.s.doyle at gmail.com (Dave Doyle) Date: Tue, 15 May 2012 13:14:55 -0400 Subject: [tpm] newest perlmonger Message-ID: Sorry for the list spam. My wife and I had a baby girl! 8 lbs 8 oz at 5:28pm (GMT -4) at Mount Sinai in Toronto. Kaylee Vivian Joy Rasheligh Doyle (little girl, big name) http://imgur.com/a/8SnpN There you go Jordan. Achievement Unlocked: Babyhood. D -- dave.s.doyle at gmail.com -------------- next part -------------- An HTML attachment was scrubbed... URL: From uri at stemsystems.com Tue May 15 10:32:12 2012 From: uri at stemsystems.com (Uri Guttman) Date: Tue, 15 May 2012 13:32:12 -0400 Subject: [tpm] Perl Template 101 In-Reply-To: <1CC62361-2DBF-402A-B6C2-8AD780005F75@vilerichard.com> References: <1CC62361-2DBF-402A-B6C2-8AD780005F75@vilerichard.com> Message-ID: <4FB2931C.8050102@stemsystems.com> On 05/15/2012 11:36 AM, Olaf Alders wrote: > > On 2012-05-13, at 11:48 AM, Alejandro Imass wrote: > >> On Sat, May 12, 2012 at 3:16 PM, Antonio Sun wrote> Hi, >>> >>> It's been a while since I last played with Perl Templates. I know it is very >>> easy, but I'm still wondering if you know some good intro/tutorial that can >>> help me to pick it up easily, considering there are now zero knowledge >>> remain in me about it. >>> >>> Also, when talking about template-ing, we can't avoid talking about the data >>> sources. What would be the lightest module to use? Most probably I need only >> >> TT2 is definitively the way to go for very powerful templates. The >> documentation is simply fantastic, and you probably won't need >> anything else but: >> >> perldoc Template::Manual >> perldoc Template::Manual::Intro >> http://template-toolkit.org/ > > I really do like TT as well. 
Just thought I'd also point out that Text::XSlate also seems to be a popular choice: https://metacpan.org/module/Text::Xslate That's likely the next templating system I'll play with. well, i have to put in a plug for my Template::Simple. it is much easier to learn and use (only 4 markups) and it can do almost any templating job you want. also with the compiler option, it is the fastest templater around (much faster than TT). most templating doesn't need all the complexity of the larger templaters but they are popular. check this module out and you may be surprised at its effectiveness and yes, simplicity. thanx, uri From shlomif at shlomifish.org Tue May 15 11:06:02 2012 From: shlomif at shlomifish.org (Shlomi Fish) Date: Tue, 15 May 2012 21:06:02 +0300 Subject: [tpm] newest perlmonger In-Reply-To: References: Message-ID: <20120515210602.5d7ee1fb@lap.shlomifish.org> On Tue, 15 May 2012 13:14:55 -0400 Dave Doyle wrote: > Sorry for the list spam. My wife and I had a baby girl! > > 8 lbs 8 oz at 5:28pm (GMT -4) at Mount Sinai in Toronto. > > Kaylee Vivian Joy Rasheligh Doyle (little girl, big name) > > http://imgur.com/a/8SnpN > Mazal Tov! Regards, Shlomi Fish > There you go Jordan. Achievement Unlocked: Babyhood. > > D > > -- > dave.s.doyle at gmail.com -- ----------------------------------------------------------------- Shlomi Fish http://www.shlomifish.org/ My Public Domain Photos - http://www.flickr.com/photos/shlomif/ There is an IGLU Cabal, but its only purpose is to deny the existence of an IGLU Cabal. ? Martha Greenberg Please reply to list if it's a mailing list post - http://shlom.in/reply . 
From antoniosun at lavabit.com Tue May 15 11:35:08 2012 From: antoniosun at lavabit.com (Antonio Sun) Date: Tue, 15 May 2012 14:35:08 -0400 Subject: [tpm] Perl Template 101 In-Reply-To: <4FB2931C.8050102@stemsystems.com> References: <1CC62361-2DBF-402A-B6C2-8AD780005F75@vilerichard.com> <4FB2931C.8050102@stemsystems.com> Message-ID: On Tue, May 15, 2012 at 1:32 PM, Uri Guttman wrote: > On 05/15/2012 11:36 AM, Olaf Alders wrote: > >> >> .. . >> >> >> I really do like TT as well. Just thought I'd also point out that >> Text::XSlate also seems to be a popular choice: >> https://metacpan.org/module/**Text::Xslate That's likely the next templating system I'll play with. >> > Ha! Finally someone mentioned XSlate. It's Up to *50~100 times* faster than TT2, claimed to be the the fastest [template module] in CPAN. http://xslate.org/ Only one problem for me -- it is not in Debian repo. So I'd avoid it for now, because I tend to believe the collective agreement/selection from DDs. well, i have to put in a plug for my Template::Simple. it is much easier > to learn and use (only 4 markups) and it can do almost any templating job > you want. also with the compiler option, it is the fastest templater around > (much faster than TT). most templating doesn't need all the complexity of > the larger templaters but they are popular. check this module out and you > may be surprised at its effectiveness and yes, simplicity. Yes, that's actually what I chose. Thanks Uri. PS. Maybe your guys are accustomed to it, but it is so amazing to me, being able to be in a local PM group and you can talk to the actual module owners, and even meet them. Still feel so amazing to me. Thanks. -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From uri at stemsystems.com Tue May 15 15:30:51 2012 From: uri at stemsystems.com (Uri Guttman) Date: Tue, 15 May 2012 18:30:51 -0400 Subject: [tpm] Perl Template 101 Message-ID: <4FB2D91B.3080503@stemsystems.com> On 05/15/2012 02:35 PM, Antonio Sun wrote: > > > On Tue, May 15, 2012 at 1:32 PM, Uri Guttman > wrote: > > > well, i have to put in a plug for my Template::Simple. it is much > easier to learn and use (only 4 markups) and it can do almost any > templating job you want. also with the compiler option, it is the > fastest templater around (much faster than TT). most templating > doesn't need all the complexity of the larger templaters but they > are popular. check this module out and you may be surprised at its > effectiveness and yes, simplicity. > > Yes, that's actually what I chose. Thanks Uri. > > PS. Maybe your guys are accustomed to it, but it is so amazing to me, > being able to be in a local PM group and you can talk to the actual > module owners, and even meet them. Still feel so amazing to me. > using my modules is free. i charge very high fees to meet me!! :) actually i plan to be at yapc::na in madison. a beer will suffice to meet me. :) uri From antoniosun at lavabit.com Tue May 15 19:04:05 2012 From: antoniosun at lavabit.com (Antonio Sun) Date: Tue, 15 May 2012 22:04:05 -0400 Subject: [tpm] Perl Template 101 In-Reply-To: <4FB2D91B.3080503@stemsystems.com> References: <4FB2D91B.3080503@stemsystems.com> Message-ID: On Tue, May 15, 2012 at 6:30 PM, Uri Guttman wrote: > using my modules is free. i charge very high fees to meet me!! :) > > actually i plan to be at yapc::na in madison. a beer will suffice to > meet me. :) > , I've seen many Perl celebrities already since I've been with TPM. I believe I will get a chance to see you in person someday. Cheers -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From ait at p2ee.org Wed May 16 07:03:35 2012 From: ait at p2ee.org (Alejandro Imass) Date: Wed, 16 May 2012 10:03:35 -0400 Subject: [tpm] Perl Template 101 In-Reply-To: References: <1CC62361-2DBF-402A-B6C2-8AD780005F75@vilerichard.com> <4FB2931C.8050102@stemsystems.com> Message-ID: On Tue, May 15, 2012 at 2:35 PM, Antonio Sun wrote: > > > On Tue, May 15, 2012 at 1:32 PM, Uri Guttman wrote: >> >> On 05/15/2012 11:36 AM, Olaf Alders wrote: >>> >>> >>>> .. . >>> >>> >>> I really do like TT as well. Just thought I'd also point out that >>> Text::XSlate also seems to be a popular choice: >>> https://metacpan.org/module/Text::Xslate That's likely the next templating >>> system I'll play with. > > > Ha! Finally someone mentioned XSlate. It's up to 50~100 times faster than > TT2, claimed to be the fastest [template module] in CPAN. > http://xslate.org/ > > Only one problem for me -- it is not in the Debian repo. So I'd avoid it for > now, because I tend to believe the collective agreement/selection from DDs. > The Debian Perl Policy is by no means indicative of quality, stability, or any other measure of a Perl library. That's what the CPAN is for. -- Alejandro Imass From indy at indigostar.com Tue May 22 18:56:32 2012 From: indy at indigostar.com (Indy Singh) Date: Tue, 22 May 2012 21:56:32 -0400 Subject: [tpm] Change create date of a directory In-Reply-To: References: <11fc642b3e14371cbf0f3df588a6760b.squirrel@mail.vex.net> <238D314A8F624073B7A50F23D4F96FD9@indy> Message-ID: <5395E7E9456B44299C1079DB8FAE7BC3@indy> Hi all, Is there a Perl way of changing the create date of a directory? utime only seems to allow changing the access and modification time. I may be going about this the wrong way. Perhaps someone can comment. What I am trying to do (simplified example) is create a makefile that will copy files from an input directory to an output directory if any input file is newer than the output file.
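In Perl terms, what I want is roughly this (a sketch with directory names of my own; the rule is "copy when the destination is missing or older than the source"):

```shell
perl -MFile::Copy -e '
    my ($in, $out) = ("indir", "outdir");
    mkdir $out unless -d $out;
    for my $src (glob "$in/*.pl") {
        (my $dst = $src) =~ s{^\Q$in\E}{$out};
        # stat slot 9 is mtime; copy if the target is absent or stale
        if (!-e $dst or (stat $src)[9] > (stat $dst)[9]) {
            copy($src, $dst) or die "copy $src -> $dst: $!";
        }
    }
'
```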
I am working on Windows 7, with Microsoft nmake and cygwin touch. Makefile #1 below works, but has the disadvantage that it requires 3 targets and a dummy file has to be left lying around. Makefile #2 uses the timestamp of the output directory as the dummy target. It almost works, except that the nmake that I am using seems to look at the 'create' date of the directory rather than the 'modified' date. This is probably a bug in nmake, but I don't want to use something else, otherwise it will start a chain reaction of more changes to existing makefiles. MAKEFILE #2 outdir: indir\*.pl if not exist $@ mkdir $@ for %f in ($?) do copy %f outdir touch $@ MAKEFILE #1 outdir: mkdir $@ outdir\dummy: indir\*.pl for %f in ($?) do copy %f outdir touch outdir\dummy try: outdir indir\dummy Indy Singh IndigoSTAR Software -- www.indigostar.com From antoniosun at lavabit.com Wed May 23 07:01:43 2012 From: antoniosun at lavabit.com (Antonio Sun) Date: Wed, 23 May 2012 10:01:43 -0400 Subject: [tpm] Change create date of a directory In-Reply-To: <5395E7E9456B44299C1079DB8FAE7BC3@indy> References: <11fc642b3e14371cbf0f3df588a6760b.squirrel@mail.vex.net> <238D314A8F624073B7A50F23D4F96FD9@indy> <5395E7E9456B44299C1079DB8FAE7BC3@indy> Message-ID: On Tue, May 22, 2012 at 9:56 PM, Indy Singh wrote: > Makefile #2 uses the timestamp of the output directory as the dummy > target. It almost works except that the nmake that I am using seems to look > at the 'create' date of the directory rather than the 'modified' date. Ha, another "Made by MS" great product. I'm dealing with such great products every day and am enjoying the pain. This is probably a bug in nmake, but I don't want to use something else > otherwise it will start a chain reaction of more changes to existing > makefile. I understand. You've been "locked in" by MS. For me, I'd take this opportunity to break away from the brain-damaged tool and start using native cygwin make, since you're already using its touch.
Changing the create time is hard enough compared to porting the makefile; who knows what is waiting for you next. cheers -------------- next part -------------- An HTML attachment was scrubbed... URL: From indy at indigostar.com Wed May 23 07:14:44 2012 From: indy at indigostar.com (Indy Singh) Date: Wed, 23 May 2012 10:14:44 -0400 Subject: [tpm] Change create date of a directory In-Reply-To: References: <11fc642b3e14371cbf0f3df588a6760b.squirrel@mail.vex.net><238D314A8F624073B7A50F23D4F96FD9@indy><5395E7E9456B44299C1079DB8FAE7BC3@indy> Message-ID: <2CF65DA0097543FA866B48B79ECEFBB1@indy> From: Antonio Sun Sent: Wednesday, May 23, 2012 10:01 AM To: Indy Singh Cc: toronto-pm at pm.org Subject: Re: [tpm] Change create date of a directory On Tue, May 22, 2012 at 9:56 PM, Indy Singh wrote: Makefile #2 uses the timestamp of the output directory as the dummy target. It almost works except that the nmake that I am using seems to look at the 'create' date of the directory rather than the 'modified' date. Ha, another "Made by MS" great product. I'm dealing with such great products every day and am enjoying the pain. This is probably a bug in nmake, but I don't want to use something else otherwise it will start a chain reaction of more changes to existing makefile. I understand. You've been "locked in" by MS. For me, I'd take this opportunity to break away from the brain-damaged tool and start using native cygwin make, since you're already using its touch. Changing the create time is hard enough compared to porting the makefile; who knows what is waiting for you next. cheers > start using native cygwin make > since you're already using its touch Two very good points. Thanks for the suggestions. I have to maintain *nix and Windows environments; this would also avoid the time wasted in constantly having to read two sets of documentation. Indy
URL: From olaf.alders at gmail.com Thu May 24 08:32:42 2012 From: olaf.alders at gmail.com (Olaf Alders) Date: Thu, 24 May 2012 11:32:42 -0400 Subject: [tpm] iCPAN 2.0.0 is now in the app store Message-ID: <56C33DB5-69CE-475D-AC56-64F2CC19ADC2@vilerichard.com> Hi Everyone, iCPAN (which I spoke about at tools night) is now *finally* downloadable as a Universal app. If you think it's helpful, please do give it a rating: http://itunes.apple.com/us/app/icpan/id377340561?mt=8 Thanks! Olaf -- Olaf Alders olaf at vilerichard.com http://vilerichard.com -- folk rock http://twitter.com/vilerichard http://cdbaby.com/cd/vilerichard From jztam at yahoo.com Thu May 24 13:16:38 2012 From: jztam at yahoo.com (J Z Tam) Date: Thu, 24 May 2012 13:16:38 -0700 (PDT) Subject: [tpm] Ideas for talks next week? In-Reply-To: <832023AB-C0C5-4EEE-8C31-16F8CDD89080@vilerichard.com> Message-ID: <1337890598.37637.YahooMailClassic@web125703.mail.ne1.yahoo.com> Mongeren, IIRC, Olaf wanted to testdrive his YAPC talk. Please confirm/deny/reschedule. And LMK how many minutes you are budgetting for. Who and what else did we want to schedule for next Thursday? Please and thanks. Dave.D, If you do not intend on attending, we __could__ just conference you In and you could give the newest mongerer a gentle introduction to the collective. /jordan From olaf at vilerichard.com Thu May 24 13:22:11 2012 From: olaf at vilerichard.com (Olaf Alders) Date: Thu, 24 May 2012 16:22:11 -0400 Subject: [tpm] Ideas for talks next week? In-Reply-To: <1337890598.37637.YahooMailClassic@web125703.mail.ne1.yahoo.com> References: <1337890598.37637.YahooMailClassic@web125703.mail.ne1.yahoo.com> Message-ID: Hi Jordan, On 2012-05-24, at 4:16 PM, J Z Tam wrote: > Mongeren, > IIRC, Olaf wanted to testdrive his YAPC talk. Please confirm/deny/reschedule. And LMK how many minutes you are budgetting for. Confirmed. 20 minutes -- which is the length of my YAPC talk. Maybe add some time for feedback and destructive criticism. Thanks! 
Olaf -- Olaf Alders olaf at vilerichard.com http://vilerichard.com -- folk rock http://twitter.com/vilerichard http://cdbaby.com/cd/vilerichard From antoniosun at lavabit.com Mon May 28 06:59:15 2012 From: antoniosun at lavabit.com (Antonio Sun) Date: Mon, 28 May 2012 09:59:15 -0400 Subject: [tpm] find, manipulate, then output Message-ID: Hi, I want to work on the strings that I find in the input, then output the processed content. I'm wondering what's the elegant way to do it. IIRC, it can be done with something like this perl -ne 'print $2 . ", ". $1. "\n" while(/.../)' But I really can't work out the rest now. Please help. Here is an example that you can work on. Given the following input, I want to output, ", " on each line. I.e., the output would be: Franklin, Benjamin Melville, Herman Thanks The Autobiography of Benjamin Franklin Benjamin Franklin 8.99 The Confidence Man Herman Melville 11.99 . . . -------------- next part -------------- An HTML attachment was scrubbed... URL: From jkeen at verizon.net Mon May 28 07:52:17 2012 From: jkeen at verizon.net (James E Keenan) Date: Mon, 28 May 2012 10:52:17 -0400 Subject: [tpm] find, manipulate, then output In-Reply-To: References: Message-ID: <4FC39121.70109@verizon.net> On 5/28/12 9:59 AM, Antonio Sun wrote: > Hi, > > I want to work on the strings that I find in the input, then output the > processed content. > I'm wondering what's the elegant way to do it. > > IIRC, it can be done with something like this > > perl -ne 'print $2 . ", ". $1. "\n" while(/.../)' > > But I really can't work out the rest now. > Please help. > > Here is an example that you can work on. Given the following input, > I want to output, ", " on each line. > I.e., the output would be: > > Franklin, Benjamin > Melville, Herman > > Thanks > > > > The Autobiography of Benjamin Franklin > > Benjamin > > Franklin > It looks like you are trying to roll your own XML parser. Why? 
jimk From antoniosun at lavabit.com Mon May 28 08:13:00 2012 From: antoniosun at lavabit.com (Antonio Sun) Date: Mon, 28 May 2012 11:13:00 -0400 Subject: [tpm] find, manipulate, then output In-Reply-To: <4FC39121.70109@verizon.net> References: <4FC39121.70109@verizon.net> Message-ID: On Mon, May 28, 2012 at 10:52 AM, James E Keenan wrote: > On 5/28/12 9:59 AM, Antonio Sun wrote: > >> Hi, >> >> I want to work on the strings that I find in the input, then output the >> processed content. >> I'm wondering what's the elegant way to do it. >> >> IIRC, it can be done with something like this >> >> perl -ne 'print $2 . ", ". $1. "\n" while(/.../)' >> >> But I really can't work out the rest now. >> Please help. >> >> Here is an example that you can work on. Given the following input, >> I want to output, ", " on each line. >> I.e., the output would be: >> >> Franklin, Benjamin >> Melville, Herman >> >> Thanks >> >> >> >> The Autobiography of Benjamin Franklin >> >> Benjamin >> >> Franklin >> >> > > It looks like you are trying to roll your own XML parser. Why? > Thank you James for you reply. I am fully aware of Perl's XML parser and XPATH handling, and I admit that it'll be much easier using the XPATH. However, my question was regarding working on the strings found and output the processed content, the Perl way. The enclosed XML was just an example. It can well be anything than XML. Thanks -------------- next part -------------- An HTML attachment was scrubbed... URL: From peter at vereshagin.org Mon May 28 08:51:32 2012 From: peter at vereshagin.org (Peter Vereshagin) Date: Mon, 28 May 2012 19:51:32 +0400 Subject: [tpm] find, manipulate, then output In-Reply-To: References: <4FC39121.70109@verizon.net> Message-ID: <20120528155130.GB5669@external.screwed.box> Hello. 2012/05/28 11:13:00 -0400 Antonio Sun => To James E Keenan : AS> >> perl -ne 'print $2 . ", ". $1. "\n" while(/.../)' AS> >> AS> >> But I really can't work out the rest now. AS> >> Please help. Sure. 
perl -Mstrict -wE 'my ( $blob, $fname, $lname ) = map {""} 0 .. 2; while ( my $str = <> ) { $blob .= $str; if ( $blob =~ m/<(first|last)-name[^>]*>([^>]*) $name ) = ( $1 => $2 ); $name =~ s/^\s+|\s+$//g; if ( $kind eq "first" ) { $fname = $name; } else { $lname = $name; } $blob = ""; } if ( $fname and $lname ) { print "$lname, $fname\n"; ( $fname => $lname ) = map {""} 0 .. 1; } }' AS> >> Here is an example that you can work on. Given the following input, AS> >> I want to output, ", " on each line. AS> >> I.e., the output would be: AS> >> AS> >> Franklin, Benjamin AS> >> Melville, Herman AS> >> AS> >> Thanks AS> >> AS> >> AS> >> AS> >> The Autobiography of Benjamin Franklin AS> >> AS> >> Benjamin AS> >> AS> >> Franklin AS> >> AS> >> AS> > AS> > It looks like you are trying to roll your own XML parser. Why? AS> > AS> AS> Thank you James for you reply. AS> AS> I am fully aware of Perl's XML parser and XPATH handling, and I admit that AS> it'll be much easier using the XPATH. However, my question was regarding AS> working on the strings found and output the processed content, the Perl AS> way. The enclosed XML was just an example. It can well be anything than AS> XML. Try Marpa::XS? For your particular XML I'd take SAX kind of, say, XML::LibXML or Expat::XS. AS> Thanks -- Peter Vereshagin (http://vereshagin.org) pgp: A0E26627 From antoniosun at lavabit.com Mon May 28 09:09:56 2012 From: antoniosun at lavabit.com (Antonio Sun) Date: Mon, 28 May 2012 12:09:56 -0400 Subject: [tpm] find, manipulate, then output In-Reply-To: <20120528155130.GB5669@external.screwed.box> References: <4FC39121.70109@verizon.net> <20120528155130.GB5669@external.screwed.box> Message-ID: On Mon, May 28, 2012 at 11:51 AM, Peter Vereshagin wrote: > AS> >> perl -ne 'print $2 . ", ". $1. "\n" while(/.../)' > AS> >> > AS> >> But I really can't work out the rest now. > AS> >> Please help. > > Sure. > > perl -Mstrict -wE 'my ( $blob, $fname, $lname ) = map {""} 0 .. 
2; while ( > my $str = <> ) { $blob .= $str; if ( $blob =~ > m/<(first|last)-name[^>]*>([^>]*) $name ) = ( $1 => $2 > ); $name =~ s/^\s+|\s+$//g; if ( $kind eq "first" ) { $fname = $name; } > else { $lname = $name; } $blob = ""; } if ( $fname and $lname ) { print > "$lname, $fname\n"; ( $fname => $lname ) = map {""} 0 .. 1; } }' > Thank you Peter for your reply. I think the reason that your script is far more complicated than what I thought is because that you are handling one line at a time. I believe that if we to slurp the whole file in and have '.' match new lines as well, it can be greatly simplified. Again, I don't have my "perl notes" at hands, so I can't prove it now. Thanks anyway. -------------- next part -------------- An HTML attachment was scrubbed... URL: From jztam at yahoo.com Mon May 28 09:29:42 2012 From: jztam at yahoo.com (J Z Tam) Date: Mon, 28 May 2012 09:29:42 -0700 (PDT) Subject: [tpm] Fw: [pm_groups] yapc::na arrival dinner Message-ID: <1338222582.88356.YahooMailClassic@web125703.mail.ne1.yahoo.com> --- On Mon, 5/28/12, Uri Guttman wrote: > From: Uri Guttman > Subject: [pm_groups] yapc::na arrival dinner > To: pm_groups at pm.org > Received: Monday, May 28, 2012, 2:34 AM > hi pm leaders, > > please forward this to your local pm if you know of any of > your members who are going to yapc::na in madison WI. it is > sold out but we need to get some information to many of the > attendees. > > thanx, > > uri > > > hi to all yapc::na attendees, > > if you haven't signed up to the yapc mailing list, please do > so. it is the primary area to interact with other yapc > attendees (world wide). you can discuss events, best local > beers, sharing rides/rooms, rental cars, places to eat, > hackathon projects, games to play etc. you can sign up on > this page: > > ? ? http://mail.pm.org/mailman/listinfo/yapc > > one of the major social events of yapc::na has been the > arrival dinner. 
it will be on tuesday, june 11 from 7-10 pm > at moe's grill and tavern. if you are planning to attend > please signup on this wiki page. we need a quality head > count for them to arrange for enough food. we are getting > drink deals (1/2 price on margaritas so far and something > for beer is in the works) so read that page also for > updates. if you are on a strict budget because you are a > student or underemployed, we will have several scholarships. > contact me off list if you want one of them. > > ? ? http://act.yapcna.org/2012/wiki?node=Arrival%20Dinner > > hope to see you at yapc::na and the arrival dinner! > > thanx, > > uri > -- Request pm.org Technical Support via support at pm.org > > pm_groups mailing list > pm_groups at pm.org > http://mail.pm.org/mailman/listinfo/pm_groups > From peter at vereshagin.org Mon May 28 09:55:22 2012 From: peter at vereshagin.org (Peter Vereshagin) Date: Mon, 28 May 2012 20:55:22 +0400 Subject: [tpm] find, manipulate, then output In-Reply-To: References: <4FC39121.70109@verizon.net> <20120528155130.GB5669@external.screwed.box> Message-ID: <20120528165522.GC5669@external.screwed.box> Hello. 2012/05/28 12:09:56 -0400 Antonio Sun => To Peter Vereshagin : AS> On Mon, May 28, 2012 at 11:51 AM, Peter Vereshagin wrote: AS> AS> > AS> >> perl -ne 'print $2 . ", ". $1. "\n" while(/.../)' AS> > AS> >> AS> > AS> >> But I really can't work out the rest now. AS> > AS> >> Please help. AS> > AS> > Sure. AS> > AS> > perl -Mstrict -wE 'my ( $blob, $fname, $lname ) = map {""} 0 .. 2; while ( AS> > my $str = <> ) { $blob .= $str; if ( $blob =~ AS> > m/<(first|last)-name[^>]*>([^>]*) $name ) = ( $1 => $2 AS> > ); $name =~ s/^\s+|\s+$//g; if ( $kind eq "first" ) { $fname = $name; } AS> > else { $lname = $name; } $blob = ""; } if ( $fname and $lname ) { print AS> > "$lname, $fname\n"; ( $fname => $lname ) = map {""} 0 .. 1; } }' AS> > AS> AS> Thank you Peter for your reply. 
AS> AS> I think the reason that your script is far more complicated than what I AS> thought is because that you are handling one line at a time. I believe that AS> if we to slurp the whole file in and have '.' match new lines as well, it why worry about '.' ? Did I miss a thing? AS> can be greatly simplified. ok I was counting on SAX rather than DOM in terms of XML parsers. At the least you didn't specify if the input is small enough for slurping, so memory is my concern, as always. And ... you needed a one-liner, right? As for me, one-liners have to be written quickly rather than just be simple. It's not always the same in Perl especially. AS> Again, I don't have my "perl notes" at hands, so I can't prove it now. Read File::Slurp and a perlre. AS> Thanks anyway. You're welcome. Here is handling the blob as a whole: perl -Mstrict -Mautodie -wE 'my ( $blob, $fname, $lname ) = map {""} 0 .. 2; $blob .= $_ while <>; while ( $blob =~ s/<(first|last)-name[^>]*>([^>]*) $name ) = ( $1 => $2 ); $name =~ s/^\s+|\s+$//g; if ( $kind eq "first" ) { $fname = $name; } else { $lname = $name; } if ( $fname and $lname ) { print "$lname, $fname\n"; ( $fname => $lname ) = map {""} 0 .. 1; } }' -- Peter Vereshagin (http://vereshagin.org) pgp: A0E26627 From liam at holoweb.net Mon May 28 17:22:05 2012 From: liam at holoweb.net (Liam R E Quin) Date: Mon, 28 May 2012 20:22:05 -0400 Subject: [tpm] find, manipulate, then output In-Reply-To: References: Message-ID: <1338250925.15863.51.camel@localhost.localdomain> On Mon, 2012-05-28 at 09:59 -0400, Antonio Sun wrote: > Here is an example that you can work on. Given the following input, > I want to output, ", " on each line. For my part, I always want readable, maintainable code. For your example, I'd use XQuery - for $book in /bookstore/book return ($book/last-name, " ", $book/first-name, " ") You could use the BaseX Perl API to run this (as an example). 
If you want to use regular expressions, here's a longer version: sub get_name($) { my ($book) = @_; die "get_name needs a book element" unless $book =~ m{^\s*<book[^>]*>.*</book>\s*$}s; my ($first, $last) = ("", ""); if ($book =~ m{<first-name>\s*([^<>]*\S)\s*</first-name>}) { $first = $1; } if ($book =~ m{<last-name>\s*([^<>]*\S)\s*</last-name>}) { $last = $1; } my $result = $last; if ($result ne "" && $first ne "") { $result .= ", "; } $result .= $first; return $result; } while ($blob =~ m{(<book[^>]*>.*?</book>)}gs) { print get_name($1), "\n"; } Liam -- Liam Quin - XML Activity Lead, W3C, http://www.w3.org/People/Quin/ Pictures from old books: http://fromoldbooks.org/ Ankh: irc.sorcery.net irc.gnome.org freenode/#xml From antoniosun at lavabit.com Mon May 28 19:01:41 2012 From: antoniosun at lavabit.com (Antonio Sun) Date: Mon, 28 May 2012 22:01:41 -0400 Subject: [tpm] find, manipulate, then output In-Reply-To: <20120528165522.GC5669@external.screwed.box> References: <4FC39121.70109@verizon.net> <20120528155130.GB5669@external.screwed.box> <20120528165522.GC5669@external.screwed.box> Message-ID: On Mon, May 28, 2012 at 12:55 PM, Peter Vereshagin wrote: > AS> > AS> >> perl -ne 'print $2 . ", ". $1. "\n" while(/.../)' > AS> > AS> >> > AS> > AS> >> But I really can't work out the rest now. > AS> > AS> >> Please help. > AS> > > AS> > Sure. > AS> > > AS> > perl -Mstrict -wE 'my ( $blob, $fname, $lname ) = map {""} 0 .. 2; > while ( > AS> > my $str = <> ) { $blob .= $str; if ( $blob =~ > AS> > m/<(first|last)-name[^>]*>([^>]*) $name ) = ( $1 > => $2 > AS> > ); $name =~ s/^\s+|\s+$//g; if ( $kind eq "first" ) { $fname = > $name; } > AS> > else { $lname = $name; } $blob = ""; } if ( $fname and $lname ) { > print > AS> > "$lname, $fname\n"; ( $fname => $lname ) = map {""} 0 .. 1; } }' > AS> > > AS> > AS> Thank you Peter for your reply. > AS> > AS> I think the reason that your script is far more complicated than what I > AS> thought is because that you are handling one line at a time. I believe > that > AS> if we to slurp the whole file in and have '.'
match new lines as well, > it > > why worry about '.' ? Did I miss a thing? > > AS> can be greatly simplified. > > ok I was counting on SAX rather than DOM in terms of XML parsers. At the > least you didn't specify if the input is small enough for slurping, so > memory is my concern, as always. > > And ... you needed a one-liner, right? As for me, one-liners have to be > written quickly rather than just be simple. It's not always the same in > Perl especially. > > AS> Again, I don't have my "perl notes" at hands, so I can't prove it now. > > Read File::Slurp and a perlre. > > AS> Thanks anyway. > > You're welcome. Here is handling the blob as a whole: > > perl -Mstrict -Mautodie -wE 'my ( $blob, $fname, $lname ) = map {""} 0 .. > 2; $blob .= $_ while <>; while ( $blob =~ > s/<(first|last)-name[^>]*>([^>]*) $name ) = ( $1 => $2 > ); $name =~ s/^\s+|\s+$//g; if ( $kind eq "first" ) { $fname = $name; } > else { $lname = $name; } if ( $fname and $lname ) { print "$lname, > $fname\n"; ( $fname => $lname ) = map {""} 0 .. 1; } }' > Thanks everyone for your help. I guess that I shouldn't have chosen XML as the example. That's the one I could find/borrow without having to cook up my own test data. My focus was on working on the strings found and outputting the processed content, but everyone seems to have been carried away by the XML. Anyway. Thanks again everyone for your help. FYI, AS> I believe that > AS> if we to slurp the whole file in and have '.' match new lines as well, > it > AS> can be greatly simplified. > This is what I meant. If we focus on working on the strings found and output the processed content: $ perl -n0777e 'print "$2, $1\n" while m{<first-name>\s*(.*?)\s*</first-name>\s*<last-name>\s*(.*?)\s*</last-name>}gs' test.txt Franklin, Benjamin Melville, Herman Thanks -------------- next part -------------- An HTML attachment was scrubbed...
URL: From abuzar.toronto at gmail.com Tue May 29 22:56:20 2012 From: abuzar.toronto at gmail.com (Abuzar Toronto) Date: Wed, 30 May 2012 01:56:20 -0400 Subject: [tpm] OT: Are SSDs really worth purchasing to speed up our computing experience? Message-ID: Hi there, Sorry for the off-topic post, but I think some folks here would have good advice on this question. I've read that SSDs significantly improve boot up speed, application startup time, etc. Does anyone have any experience with SSDs? I don't think a few extra seconds for computer/application startup is a big deal, but it would be nice to have applications that are more responsive, run faster or more smoothly, like working in photoshop, or being able to quickly skip through large images in a slide show application, etc. The system this is for is mostly used for photoshop, gimp, blender, and a few video editing apps like adobe premier. The OSes being used are Windows, two flavors of Mint Linux, and Haiku. Is the performance boost from SSDs really anywhere close to what they're hyped up to be? Here's an example of the kinds of articles I've been reading: http://www.pcmag.com/article2/0,2817,2404258,00.asp I've read stuff like 'investing in an SSD is the single most important thing you can do to improve the overall performance of your machine, because hard drive latency is increasingly becoming a bottleneck in application performance' (not exactly those words, but you get the idea). Personally, I have a suspicion that code bloat might be a bottleneck in application performance... but, whatever. The hardware I'm working with: - Processor: A8-3870K http://www.tigerdirect.ca/applications/SearchTools/item-details.asp?EdpNo=1723935&CatId=7239 - Motherboard: Gigabyte A75-UD4H http://www.canadacomputers.com/product_info.php?cPath=26_334&item_id=039551 - 8GB of Kingston DDR3 1866Mhz RAM Any advice is appreciated. Thank you!! 
Abuzar From stuart at morungos.com Wed May 30 06:48:10 2012 From: stuart at morungos.com (Stuart Watt) Date: Wed, 30 May 2012 09:48:10 -0400 Subject: [tpm] OT: Are SSDs really worth purchasing to speed up our computing experience? In-Reply-To: References: Message-ID: As a MacBook Air user, I can say yes. It's a surprisingly speedy little system, considering its relatively weak CPU, and I'd put most of this down to its SSD. I also have a Mac Mini, which has about an equivalent CPU but traditional hard disk, and the responsiveness is hugely better with the MacBook Air. It's very fast at application startup. An informal poll of a few people on Twitter a while back rated the MacBook Air as more responsive than the MacBook Pro, which had a much faster, and quad core, CPU. You are probably right that it is code bloat, but the code has to get off the disk somehow, and if its bloated, that will take longer than if it isn't. If disk latency is an issue, SSD is a good plan. For video editing, there will be no point - that's almost entirely CPU bound. Possibly the same is true for most image editing. But for general application responsiveness -- especially for large applications which might otherwise be hit by paging -- browsing data and so on, SSD works well for me. All the best Stuart On 2012-05-30, at 1:56 AM, Abuzar Toronto wrote: > Hi there, > > Sorry for the off-topic post, but I think some folks here would have > good advice on this question. I've read that SSDs significantly > improve boot up speed, application startup time, etc. Does anyone > have any experience with SSDs? > > I don't think a few extra seconds for computer/application startup is > a big deal, but it would be nice to have applications that are more > responsive, run faster or more smoothly, like working in photoshop, or > being able to quickly skip through large images in a slide show > application, etc. 
> > The system this is for is mostly used for photoshop, gimp, blender, > and a few video editing apps like adobe premier. The OSes being used > are Windows, two flavors of Mint Linux, and Haiku. > > Is the performance boost from SSDs really anywhere close to what > they're hyped up to be? Here's an example of the kinds of articles > I've been reading: > http://www.pcmag.com/article2/0,2817,2404258,00.asp > > I've read stuff like 'investing in an SSD is the single most important > thing you can do to improve the overall performance of your machine, > because hard drive latency is increasingly becoming a bottleneck in > application performance' (not exactly those words, but you get the > idea). > > Personally, I have a suspicion that code bloat might be a bottleneck > in application performance... but, whatever. > > The hardware I'm working with: > - Processor: A8-3870K > http://www.tigerdirect.ca/applications/SearchTools/item-details.asp?EdpNo=1723935&CatId=7239 > - Motherboard: Gigabyte A75-UD4H > http://www.canadacomputers.com/product_info.php?cPath=26_334&item_id=039551 > - 8GB of Kingston DDR3 1866Mhz RAM > > Any advice is appreciated. > > Thank you!! > Abuzar > _______________________________________________ > toronto-pm mailing list > toronto-pm at pm.org > http://mail.pm.org/mailman/listinfo/toronto-pm From legrady at gmail.com Wed May 30 09:41:45 2012 From: legrady at gmail.com (Tom Legrady) Date: Wed, 30 May 2012 12:41:45 -0400 Subject: [tpm] Comparing db data Message-ID: I've occasionally written custom code to compare two databases, or a db and a source file. It's always a pain, checking for nulls, distinguishing between strings and numbers, and other special cases. Are there any modules which simplify the process?
From olaf at vilerichard.com Wed May 30 14:46:37 2012 From: olaf at vilerichard.com (Olaf Alders) Date: Wed, 30 May 2012 17:46:37 -0400 Subject: [tpm] presentation slides (contrast) Message-ID: <0E07FCB7-D532-42D0-A1AC-18AB56F5AAD5@vilerichard.com> I'm just putting my slides for tomorrow together. What's the consensus on slide contrast and colour choice? I'm using a dark solarized theme for my terminal, but I'm not sure how well people will be able to read that on the projector. Thoughts? Figured I'd ask before I make *all* my slides. Light text on dark background? Dark text on light background? Olaf -- Olaf Alders olaf at vilerichard.com http://vilerichard.com -- folk rock http://twitter.com/vilerichard http://cdbaby.com/cd/vilerichard From jkeen at verizon.net Wed May 30 15:38:33 2012 From: jkeen at verizon.net (James E Keenan) Date: Wed, 30 May 2012 18:38:33 -0400 Subject: [tpm] OT: Are SSDs really worth purchasing to speed up our computing experience? In-Reply-To: References: Message-ID: <4FC6A169.2010206@verizon.net> On 5/30/12 9:48 AM, Stuart Watt wrote: > As a MacBook Air user, I can say yes. It's a surprisingly speedy little system, considering its relatively weak CPU, and I'd put most of this down to its SSD. I also have a Mac Mini, which has about an equivalent CPU but traditional hard disk, and the responsiveness is hugely better with the MacBook Air. It's very fast at application startup. An informal poll of a few people on Twitter a while back rated the MacBook Air as more responsive than the MacBook Pro, which had a much faster, and quad core, CPU. You are probably right that it is code bloat, but the code has to get off the disk somehow, and if its bloated, that will take longer than if it isn't. If disk latency is an issue, SSD is a good plan. > > For video editing, there will be no point - that's almost entirely CPU bound. Possibly the same is true for most image editing. 
But for general application responsiveness -- especially for large applications which might otherwise be hit by paging -- browsing data and so on, SSD works well for me. > > All the best > Stuart > > > > On 2012-05-30, at 1:56 AM, Abuzar Toronto wrote: > >> Hi there, >> >> Sorry for the off-topic post, but I think some folks here would have >> good advice on this question. I've read that SSDs significantly >> improve boot up speed, application startup time, etc. Does anyone >> have any experience with SSDs? I don't know what SSD means in this context? Can anyone explain? jimk (who is contemplating buying a new Mac laptop and wants to know how much machine to buy) From mike at stok.ca Wed May 30 15:40:37 2012 From: mike at stok.ca (Mike Stok) Date: Wed, 30 May 2012 18:40:37 -0400 Subject: [tpm] OT: Are SSDs really worth purchasing to speed up our computing experience? In-Reply-To: <4FC6A169.2010206@verizon.net> References: <4FC6A169.2010206@verizon.net> Message-ID: <7496609C-5717-4777-83CA-DFEE3E6D266A@stok.ca> Solid State Drive. On 2012-05-30, at 6:38 PM, James E Keenan wrote: > On 5/30/12 9:48 AM, Stuart Watt wrote: >> As a MacBook Air user, I can say yes. It's a surprisingly speedy little system, considering its relatively weak CPU, and I'd put most of this down to its SSD. I also have a Mac Mini, which has about an equivalent CPU but traditional hard disk, and the responsiveness is hugely better with the MacBook Air. It's very fast at application startup. An informal poll of a few people on Twitter a while back rated the MacBook Air as more responsive than the MacBook Pro, which had a much faster, and quad core, CPU. You are probably right that it is code bloat, but the code has to get off the disk somehow, and if its bloated, that will take longer than if it isn't. If disk latency is an issue, SSD is a good plan. >> >> For video editing, there will be no point - that's almost entirely CPU bound. Possibly the same is true for most image editing. 
But for general application responsiveness -- especially for large applications which might otherwise be hit by paging -- browsing data and so on, SSD works well for me. >> >> All the best >> Stuart >> >> >> >> On 2012-05-30, at 1:56 AM, Abuzar Toronto wrote: >> >>> Hi there, >>> >>> Sorry for the off-topic post, but I think some folks here would have >>> good advice on this question. I've read that SSDs significantly >>> improve boot up speed, application startup time, etc. Does anyone >>> have any experience with SSDs? > > I don't know what SSD means in this context? Can anyone explain? > > jimk > (who is contemplating buying a new Mac laptop and wants to know how much machine to buy) > > _______________________________________________ > toronto-pm mailing list > toronto-pm at pm.org > http://mail.pm.org/mailman/listinfo/toronto-pm -- Mike Stok http://www.stok.ca/~mike/ The "`Stok' disclaimers" apply. From jkeen at verizon.net Wed May 30 15:46:42 2012 From: jkeen at verizon.net (James E Keenan) Date: Wed, 30 May 2012 18:46:42 -0400 Subject: [tpm] OT: Are SSDs really worth purchasing to speed up our computing experience? In-Reply-To: <7496609C-5717-4777-83CA-DFEE3E6D266A@stok.ca> References: <4FC6A169.2010206@verizon.net> <7496609C-5717-4777-83CA-DFEE3E6D266A@stok.ca> Message-ID: <4FC6A352.4010702@verizon.net> On 5/30/12 6:40 PM, Mike Stok wrote: > Solid State Drive. > Thanks, Mike. > On 2012-05-30, at 6:38 PM, James E Keenan wrote: > >> On 5/30/12 9:48 AM, Stuart Watt wrote: >>> As a MacBook Air user, I can say yes. It's a surprisingly speedy little system, considering its relatively weak CPU, and I'd put most of this down to its SSD. I also have a Mac Mini, which has about an equivalent CPU but traditional hard disk, and the responsiveness is hugely better with the MacBook Air. It's very fast at application startup. 
An informal poll of a few people on Twitter a while back rated the MacBook Air as more responsive than the MacBook Pro, which had a much faster, and quad core, CPU. You are probably right that it is code bloat, but the code has to get off the disk somehow, and if its bloated, that will take longer than if it isn't. If disk latency is an issue, SSD is a good plan. >>> >>> For video editing, there will be no point - that's almost entirely CPU bound. How about for compiling Perl 5, Parrot or Perl 6 from source code? That's my heaviest-duty processing need. jimk From dave.s.doyle at gmail.com Wed May 30 15:42:49 2012 From: dave.s.doyle at gmail.com (Dave Doyle) Date: Wed, 30 May 2012 18:42:49 -0400 Subject: [tpm] OT: Are SSDs really worth purchasing to speed up our computing experience? In-Reply-To: <4FC6A169.2010206@verizon.net> References: <4FC6A169.2010206@verizon.net> Message-ID: On 30 May 2012 18:38, James E Keenan wrote: > > I don't know what SSD means in this context? Can anyone explain? > Solid State Drive. Flash memory hard drive instead of the spinning disk. :) And for the record: SSD is a huge performance increase when you do ANYTHING I/O based. I mean it. My MBP flies. Yes, it has a nicer processor than my old 'puter, but folks who have the same model and the traditional spinning disk HD have commented on the speed of my boot. Having attempted to process CPAN before by expanding every tarball, I can tell you the difference is stunning between my old iMac and my MBP. D -- dave.s.doyle at gmail.com -------------- next part -------------- An HTML attachment was scrubbed... URL: From legrady at gmail.com Wed May 30 16:33:19 2012 From: legrady at gmail.com (Tom Legrady) Date: Wed, 30 May 2012 19:33:19 -0400 Subject: [tpm] OT: Are SSDs really worth purchasing to speed up our computing experience? In-Reply-To: References: <4FC6A169.2010206@verizon.net> Message-ID: I have heard that SSDs have limited lifetimes, compared to disk, only so many read-write cycles. 
Don't know how significant that is in actual use. Tom On Wed, May 30, 2012 at 6:42 PM, Dave Doyle wrote: > > On 30 May 2012 18:38, James E Keenan wrote: > >> >> I don't know what SSD means in this context? Can anyone explain? >> > > Solid State Drive. Flash memory hard drive instead of the spinning disk. > :) > > And for the record: SSD is a huge performance increase when you do > ANYTHING I/O based. I mean it. My MBP flies. Yes, it has a nicer > processor than my old 'puter, but folks who have the same model and the > traditional spinning disk HD have commented on the speed of my boot. > > Having attempted to process CPAN before by expanding every tarball, I can > tell you the difference is stunning between my old iMac and my MBP. > > D > > -- > dave.s.doyle at gmail.com > > > _______________________________________________ > toronto-pm mailing list > toronto-pm at pm.org > http://mail.pm.org/mailman/listinfo/toronto-pm > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From adnan.kobir at gmail.com Wed May 30 16:50:00 2012 From: adnan.kobir at gmail.com (Adnan Kobir) Date: Wed, 30 May 2012 19:50:00 -0400 Subject: [tpm] OT: Are SSDs really worth purchasing to speed up our computing experience? In-Reply-To: References: <4FC6A169.2010206@verizon.net> Message-ID: Also it depends on the type of SSD your buying. I bought a ocz revodrive x4 and it just blew away every other SSD. On May 30, 2012 7:33 PM, "Tom Legrady" wrote: > I have heard that SSDs have limited lifetimes, compared to disk, only so > many read-write cycles. Don't know how significant that is in actual use. > > Tom > > On Wed, May 30, 2012 at 6:42 PM, Dave Doyle wrote: > >> >> On 30 May 2012 18:38, James E Keenan wrote: >> >>> >>> I don't know what SSD means in this context? Can anyone explain? >>> >> >> Solid State Drive. Flash memory hard drive instead of the spinning disk. 
>> :)
>>
>> And for the record: SSD is a huge performance increase when you do
>> ANYTHING I/O based. I mean it. My MBP flies. Yes, it has a nicer
>> processor than my old 'puter, but folks who have the same model and the
>> traditional spinning disk HD have commented on the speed of my boot.
>>
>> Having attempted to process CPAN before by expanding every tarball, I can
>> tell you the difference is stunning between my old iMac and my MBP.
>>
>> D
>>
>> --
>> dave.s.doyle at gmail.com
>>
>> _______________________________________________
>> toronto-pm mailing list
>> toronto-pm at pm.org
>> http://mail.pm.org/mailman/listinfo/toronto-pm
>>
>
> _______________________________________________
> toronto-pm mailing list
> toronto-pm at pm.org
> http://mail.pm.org/mailman/listinfo/toronto-pm
>
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From ceeshek at gmail.com  Wed May 30 17:52:03 2012
From: ceeshek at gmail.com (Cees Hek)
Date: Thu, 31 May 2012 10:52:03 +1000
Subject: [tpm] OT: Are SSDs really worth purchasing to speed up our computing experience?
In-Reply-To:
References: <4FC6A169.2010206@verizon.net>
Message-ID:

On Thu, May 31, 2012 at 9:33 AM, Tom Legrady wrote:
> I have heard that SSDs have limited lifetimes, compared to disk, only so
> many read-write cycles. Don't know how significant that is in actual use.

They do, but these days you will more than likely be replacing your system before you run into any issues with read/write cycles on an SSD. My current laptop (Dell XPS 1640) was purchased three years ago and came with a 240Gb SSD drive which I have never had any issues with (note that this is anecdotal and doesn't prove there aren't any issues with some SSD drives). I also have an SSD in my work computer and my media center... I think the read/write cycles are more of an issue in enterprise situations.
And enterprise-grade SSDs have contingencies for this, adding spare space on the drive that is used when needed. Also, files that change often are moved around the drive (wear levelling) so that reads and writes are more evenly distributed across the entire drive.

Personally I wouldn't build a system these days without an SSD. For a desktop I would buy a decent but small SSD for the OS (60-120Gb), and then stick in a 2Tb spindle disk for data. For a laptop it is trickier, since you only have room for one drive, so you have to go bigger, which is still expensive... But if you can afford it, the speed, low power, and silent operation of an SSD are a huge asset in a laptop.

Cheers,

Cees

ps. my 3 year old laptop cold boots into a fully loaded Ubuntu desktop in ~14 seconds (that includes a second or so for me to type in my password at the login prompt). LibreOffice and Firefox take ~1 second each to load.

>
> Tom
>

From olaf.alders at gmail.com  Wed May 30 18:41:59 2012
From: olaf.alders at gmail.com (Olaf Alders)
Date: Wed, 30 May 2012 21:41:59 -0400
Subject: [tpm] OT: Are SSDs really worth purchasing to speed up our computing experience?
In-Reply-To:
References: <4FC6A169.2010206@verizon.net>
Message-ID:

On 2012-05-30, at 8:52 PM, Cees Hek wrote:

> Personally I wouldn't build a system these days without an SSD. For a
> desktop I would buy a decent but small SSD for the OS (60-120Gb), and
> then stick a 2Tb spindle disk for data. For a laptop it is trickier
> since you only have room for one drive so you have to go bigger which
> is still expensive... But if you can afford it, the speed, low power,
> silent operation of an SSD are a huge asset in a laptop.

With a MacBook there's always the option of replacing the DVD/optical drive with either an SSD or a second hard drive, which is what I have been thinking about trying.
http://www.mcetech.com/optibay/

Best,

Olaf

--
Olaf Alders
olaf at vilerichard.com

http://vilerichard.com -- folk rock
http://twitter.com/vilerichard
http://cdbaby.com/cd/vilerichard

From shantanu at cpan.org  Wed May 30 20:40:29 2012
From: shantanu at cpan.org (Shantanu Bhadoria)
Date: Thu, 31 May 2012 11:40:29 +0800
Subject: [tpm] Comparing db data
Message-ID:

If you are using MySQL as your database, you should use MySQL Workbench to compare, merge, and diff DBs; it's free. I found it very useful myself. There are commercial options available too, like Redgate, but they are more expensive.

Message: 3
> Date: Wed, 30 May 2012 12:41:45 -0400
> From: Tom Legrady
> To: toronto-pm at pm.org
> Subject: [tpm] Comparing db data
> Message-ID:
> Content-Type: text/plain; charset=utf-8
>
> I've occasionally written custom code to compare two databases or a db
> and source file. It's always a pain, checking for nulls, distinguishing
> between strings and numbers and other special cases.
>
> Are there any modules which simplify the process?
>
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From antoniosun at lavabit.com  Wed May 30 20:53:55 2012
From: antoniosun at lavabit.com (Antonio Sun)
Date: Wed, 30 May 2012 23:53:55 -0400
Subject: [tpm] OT: Are SSDs really worth purchasing to speed up our computing experience?
In-Reply-To:
References: <4FC6A169.2010206@verizon.net>
Message-ID:

On Wed, May 30, 2012 at 6:42 PM, Dave Doyle wrote:

>> I don't know what SSD means in this context? Can anyone explain?
>
> Solid State Drive. Flash memory hard drive instead of the spinning disk.
> :)
>
> And for the record: SSD is a huge performance increase when you do
> ANYTHING I/O based. I mean it. My MBP flies. Yes, it has a nicer
> processor than my old 'puter, but folks who have the same model and the
> traditional spinning disk HD have commented on the speed of my boot.
>

I still don't quite get why there is such a performance boost.
So Stuart, your MacBook Air, and Dave, your MBP, do they only contain SSDs, not ordinary spinning disks?

> investing in an SSD is the single most important
> thing you can do to improve the overall performance of your machine

For any old Linux box, with several TB of HD, will investing in an SSD really make much of a difference? I think the only way an SSD could help such a box is to use the whole SSD as swap. In that case, I don't think the setup would improve boot-up speed or application startup time much.

Just thinking out loud here. Can anyone help me out here? Thanks

-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From antoniosun at lavabit.com  Wed May 30 21:07:04 2012
From: antoniosun at lavabit.com (Antonio Sun)
Date: Thu, 31 May 2012 00:07:04 -0400
Subject: [tpm] presentation slides (contrast)
In-Reply-To: <0E07FCB7-D532-42D0-A1AC-18AB56F5AAD5@vilerichard.com>
References: <0E07FCB7-D532-42D0-A1AC-18AB56F5AAD5@vilerichard.com>
Message-ID:

On Wed, May 30, 2012 at 5:46 PM, Olaf Alders wrote:

> I'm just putting my slides for tomorrow together. What's the consensus on
> slide contrast and colour choice? I'm using a dark solarized theme for my
> terminal, but I'm not sure how well people will be able to read that on the
> projector. Thoughts? Figured I'd ask before I make *all* my slides.
>
> Light text on dark background?
> Dark text on light background?

Personally I prefer light text on a dark background, and had thought it would work fine for projectors as well. But it turns out not to look that great when projected on the wall. Maybe bolding the text for your presentation would help?

-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From liam at holoweb.net  Wed May 30 21:53:56 2012
From: liam at holoweb.net (Liam R E Quin)
Date: Thu, 31 May 2012 00:53:56 -0400
Subject: [tpm] OT: Are SSDs really worth purchasing to speed up our computing experience?
In-Reply-To:
References: <4FC6A169.2010206@verizon.net>
Message-ID: <1338440036.19196.30.camel@localhost.localdomain>

On Wed, 2012-05-30 at 23:53 -0400, Antonio Sun wrote:
> For any old Linux box, with several TB of HD, will investing in an SSD
> really make much of a difference?

Probably, yes. Disk access tends to be a major bottleneck on Unix-like systems. The traditional trick was to put /bin and /etc on a faster disk, along with /tmp and swap. For Linux, /usr/lib64 would be a good candidate. But it depends on your usage.

If you have 16 TB of RAM and only 12T of disk it'll go even faster if you have a background process to access all of disk to get it all into the cache... but most of us have massively more disk than memory.

Boot time is only a few seconds these days anyway, but saving a fraction of a second on starting every process would very quickly add up.

Liam

--
Liam Quin - XML Activity Lead, W3C, http://www.w3.org/People/Quin/
Pictures from old books: http://fromoldbooks.org/
Ankh: irc.sorcery.net irc.gnome.org freenode/#xml

From stuart at morungos.com  Thu May 31 07:26:14 2012
From: stuart at morungos.com (Stuart Watt)
Date: Thu, 31 May 2012 10:26:14 -0400
Subject: [tpm] OT: Are SSDs really worth purchasing to speed up our computing experience?
In-Reply-To:
References: <4FC6A169.2010206@verizon.net>
Message-ID:

On 2012-05-30, at 11:53 PM, Antonio Sun wrote:

> Stuart, your MacBook Air, and Dave, your MBP, do they only contain SSDs, not ordinary spinning disks?

Yes for me. In the MacBook Air, there's actually no room for a traditional hard drive.

> For any old Linux box, with several TB of HD, will investing in an SSD really make much of a difference?

It depends on the use. Several TB of SSD will cost you. I'd consider these issues:

1. Random access vs. sequential access - if you are accessing data sequentially, as is common in, e.g., video processing, bioinformatics, etc., then disk latency doesn't really matter. There could be a bandwidth issue (see #2) if the processing is light, but if it is intensive, there really is little benefit. Random-access workloads, e.g., databases and file systems, gain significantly, as they don't spend time waiting for the disk heads to move to the right places.

2. Bandwidth - a second major benefit of SSD is increased data bandwidth. That can be significant if you have a decent processor, but a RAID array might also deliver enough data to keep the system more than busy. Most SSDs are constructed like mini RAID arrays, because horizontal scaling doesn't require complex mechanical systems. I've used 8-core systems with RAID arrays for a good range of applications in information retrieval, and most of the time they were CPU bound, so SSDs would not have made them any faster.

3. Write versus read - cheaper SSDs are much (~10x) faster for reading than for writing; expensive ones level that out. If you're doing a lot of writing, that could be a factor, but if mostly reading, then you probably have a better price point available for the benefit.

4. Cost - SSDs are more expensive, especially high-performance ones. For terabytes of data, is it worth it? Well, it depends on how you use it - see #1 and #2 above.

5. Power - SSDs work especially well in laptops as they don't need as much juice.

If I really wanted a truly high performance system, I'd ask whether SSD is really the right answer, as you're still shovelling all the data through the same data channels, and I/O bandwidth could become a bottleneck. You might be better off with a small cluster for some tasks. For laptops, SSDs are wonderful and there are few drawbacks, but for servers and workstations, it depends on what they are being used for.
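[Editor's note: Stuart's first point - random versus sequential access - is easy to probe for yourself. The sketch below is deliberately crude and not a proper benchmark: the file name, block size, and file size are made-up illustrative values, and on a file this small the OS page cache will flatter both numbers (a real test needs a file much larger than RAM, or direct I/O).]

```perl
use strict;
use warnings;
use Time::HiRes qw(time);
use Fcntl qw(O_RDONLY);

my $file   = 'latency_probe.bin';   # hypothetical scratch file
my $bs     = 4096;                  # block size in bytes
my $blocks = 1024;                  # 1024 blocks = 4 MiB total

# Build the scratch file.
open my $out, '>', $file or die "write $file: $!";
print {$out} 'x' x ( $bs * $blocks );
close $out;

sysopen my $fh, $file, O_RDONLY or die "read $file: $!";
my $buf;

# Sequential pass: read every block in order.
my $t0 = time;
for my $i ( 0 .. $blocks - 1 ) {
    sysseek $fh, $i * $bs, 0;
    sysread $fh, $buf, $bs;
}
my $seq = time - $t0;

# Random pass: the same number of reads at uniformly random offsets.
$t0 = time;
for ( 1 .. $blocks ) {
    sysseek $fh, int( rand $blocks ) * $bs, 0;
    sysread $fh, $buf, $bs;
}
my $rnd = time - $t0;

printf "sequential: %.4fs  random: %.4fs\n", $seq, $rnd;

close $fh;
unlink $file or die "unlink $file: $!";
```

On a spinning disk (with the cache defeated) the random pass is dramatically slower, because every read waits on a head seek; on an SSD the two numbers stay close, which is exactly why databases and file systems gain so much.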
All the best

Stuart

From arocker at Vex.Net  Thu May 31 08:19:40 2012
From: arocker at Vex.Net (arocker at Vex.Net)
Date: Thu, 31 May 2012 11:19:40 -0400
Subject: [tpm] OT: Are SSDs really worth purchasing to speed up our computing experience?
In-Reply-To:
References: <4FC6A169.2010206@verizon.net>
Message-ID: <56b45030d405c486122bd435539f0ffe.squirrel@mail.vex.net>

>
> 2. Bandwidth - a second major benefit of SSD is increased data bandwidth.
>

The extreme case is units like the Fusion-io board. If I understand the idea correctly, it functions like a disk, but eliminates most of the intermediate handling that a disk requires, going straight to/from the bus, at bus speeds. Mega-performance for mega-dollars, and it's a plug-in card, so it probably won't fit your laptop.

From abuzar at abuzar.com  Thu May 31 14:58:45 2012
From: abuzar at abuzar.com (Abuzar)
Date: Thu, 31 May 2012 17:58:45 -0400
Subject: [tpm] OT: Are SSDs really worth purchasing to speed up our computing experience?
In-Reply-To: <56b45030d405c486122bd435539f0ffe.squirrel@mail.vex.net>
References: <4FC6A169.2010206@verizon.net> <56b45030d405c486122bd435539f0ffe.squirrel@mail.vex.net>
Message-ID:

Thanks Stuart and Dave and everyone else for the review... sounds like I should run out to buy this SSD on sale today, which I'll do right now :-)

Much appreciated, thank you all for sharing info :-)
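[Editor's note: circling back to Tom's database-comparison question earlier in the thread - I'm not aware of a single CPAN module that handles every special case he lists, but the usual hand-rolled shape is to pull each side into a hash keyed by primary key and normalize cells before comparing. The row data and column names below are invented for illustration; in real code each %db_* hash would come from something like DBI's selectall_hashref().]

```perl
use strict;
use warnings;

# Two hypothetical row sets keyed by primary key (in practice,
# fetched from each database with DBI's selectall_hashref()).
my %db_a = (
    1 => { name => 'foo', qty => 3     },
    2 => { name => 'bar', qty => undef },   # NULL column
);
my %db_b = (
    1 => { name => 'foo', qty => '3' },     # same value, but a string
    3 => { name => 'baz', qty => 1   },
);

# Normalize a cell so NULL/undef and numeric-vs-string values compare
# sanely: stringify everything, and map undef to a sentinel that no
# real value can collide with.
sub cell { defined $_[0] ? "$_[0]" : "\0NULL" }

my @report;
my %all = ( %db_a, %db_b );                 # union of primary keys
for my $id ( sort keys %all ) {
    if ( !exists $db_a{$id} ) { push @report, "only in B: $id"; next; }
    if ( !exists $db_b{$id} ) { push @report, "only in A: $id"; next; }
    for my $col ( sort keys %{ $db_a{$id} } ) {
        push @report, "differs: $id.$col"
            if cell( $db_a{$id}{$col} ) ne cell( $db_b{$id}{$col} );
    }
}
print "$_\n" for @report;
```

Here row 1 compares equal even though one side holds the number 3 and the other the string '3', undef cells go through cell() without tripping "uninitialized value" warnings, and rows 2 and 3 are reported as present on only one side - the special cases Tom mentioned.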