[sb.pm] When should we meet next?

Robert Rothenberg wlkngowl at i-2000.com
Thu Apr 15 16:54:51 CDT 2004



On 4/14/2004 3:11 PM Siddhartha Basu wrote:

> Most definitely. But at the moment I am facing more of a design problem 
> than a code-related one. I think I can discuss it here so that I can get 
> some help with a better approach.  ...

> * What would be a good approach for taking inputs for a program, especially 
> if I want to run it from a cron job?
>     * From the command line, using @ARGV or Getopt::Long.
>     * Or by setting environment variables and reading them from the %ENV
>         hash.
> Right now I have a mix of both, but it is becoming cumbersome and I am 
> thinking about settling on one approach.

I'd use command-line arguments, since they are easier to pass when you run 
the scripts as one-shots.
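For the command-line route, here is a minimal Getopt::Long sketch; the option 
names (--input, --db, --verbose) are made up for illustration, not taken from 
your scripts:

```perl
#!/usr/bin/perl
use strict;
use warnings;
use Getopt::Long;

# Parse a sample argument list; in a real cron entry these would come
# from the actual command line, e.g.:
#   0 2 * * * /path/to/load.pl --input genes.txt --verbose
local @ARGV = ('--input', 'genes.txt', '--verbose');

my %opt = ( verbose => 0 );
GetOptions(
    'input=s'  => \$opt{input},    # required input file name
    'db=s'     => \$opt{db},       # optional database name
    'verbose!' => \$opt{verbose},  # --verbose / --noverbose toggle
) or die "Usage: $0 --input FILE [--db NAME] [--verbose]\n";

die "Missing required --input\n" unless defined $opt{input};
print "input=$opt{input} verbose=$opt{verbose}\n";
```

Because every setting is spelled out in the crontab line itself, you can see 
at a glance how each job is configured, which %ENV-based setups tend to hide.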

> * Sometimes I have to read a flat text file line by line, split out a 
> column, and then check whether the data in that column is present in 
> another text file. So my approach is to either read the second text 

BioPerl has some flat-file database drivers.  DBD::CSV, DBD::Sprite, or 
DBD::File provide DBI drivers that can handle flat files and simplify 
your work.
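As a sketch of the DBI route (this assumes DBD::CSV is installed from CPAN; 
the file name and columns here are invented for the example):

```perl
use strict;
use warnings;
use DBI;

# Write a tiny tab-delimited sample file; with DBD::CSV the table
# name below is simply the file name inside f_dir.
open my $fh, '>', 'genes' or die "cannot write genes: $!";
print $fh "id\tdescr\n";
print $fh "BRCA1\tbreast cancer 1\n";
close $fh;

# DBD::CSV lets you run SQL against plain text files, so one script
# can serve differently formatted files by changing the attributes
# (separator, directory) instead of writing a new parser each time.
my $dbh = DBI->connect('dbi:CSV:', undef, undef, {
    f_dir        => '.',     # directory holding the flat files
    csv_sep_char => "\t",    # column separator for these files
    RaiseError   => 1,
});

my $sth = $dbh->prepare('SELECT descr FROM genes WHERE id = ?');
$sth->execute('BRCA1');
my ($descr) = $sth->fetchrow_array;
print "$descr\n";

$dbh->disconnect;
unlink 'genes';  # clean up the sample file
```

The lookup-one-column-in-another-file chore then becomes a single SQL query 
instead of a hand-rolled hash build per file format.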

> file into a DBM hash or a MySQL database. The downside I am facing 
> is that I have to write a script for every possible text file to be 
> searched. Text files with varying formats also compound the problem. 
> Nothing else comes to mind at the moment, so I am dealing with a 
> bunch of scattered scripts.

Where are these text files coming from that they are in different formats?

> * What kind of format should I use for writing a log file: flat text 
> or XML?

I'd avoid XML like the plague, unless you need to pass the logs to a program 
that requires XML.

Do you really need sophisticated markup for a log file?

Another alternative is YAML, which is more human-readable. See 
http://yaml.org/ for more information.
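
A sketch of what a YAML log record might look like (this assumes the YAML 
module from CPAN; the field names are illustrative):

```perl
use strict;
use warnings;
use YAML qw(Dump Load);

# Each log entry is a small hash dumped as one YAML document; it stays
# grep-able and human-readable, yet round-trips back into a Perl data
# structure without a custom parser.
my %entry = (
    time    => scalar localtime,
    level   => 'info',
    message => 'loaded 42 sequences',
);
print Dump(\%entry);

# Reading an entry back is a single call:
my $back = Load(Dump(\%entry));
print "$back->{level}: $back->{message}\n";
```

Compare that with XML, where you would need a parser module and a schema 
just to grep your own logs.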


More information about the StonyBrook-PM mailing list