[tpm] Looking for Cpan Module for Parsing Metadata of Ebooks, Audiobooks and Comics
xerofoify at gmail.com
Sun Jun 4 17:40:47 PDT 2017
On 2017-06-04 03:27 PM, Liam R E Quin wrote:
> On Sun, 2017-06-04 at 11:32 -0400, nick wrote:
>> On 2017-06-03 09:24 PM, Liam R E Quin wrote:
>>> If it's too slow you could use a database - for a fast tree store I'd
>>> suggest BaseX with the Perl API,
>> Good point Liam. I do have a copy of Programming Perl, so I will start
>> looking through that again after I refresh my knowledge with Learning
>> Perl. It's been a few years, so I would like a refresher :).
>> However, on that note: coming from a C++/C/Java background, why not
>> just run a profiler, see what is causing the slowdowns, and then decide
>> based on that which optimizations to make?
> If you're doing (say) a text search with a regular expression over
> 100MBytes of text, on a system less than (say) 5 or 10 years old, and
> not running so many other things it's going as slowly as a blind slave
> in a sulphur mine, you should get an answer in well under a second.
> If you have 100,000 files it will take longer because of the system
> overhead of opening 100,000 files.
> Profiling alone won't show you when to choose a different strategy.
> At any rate it'll probably be fine; I just wanted to give you something
> to keep in mind.
Profiling is a tool like any other: it's only useful to someone who understands its
use and limitations. Profiling only gives you an idea of where possible slowdowns are,
and all programming tools give false warnings. It's the programmer's job to work out
which are actually valuable and which are not.
My point was that using a profiler is useful, but not sufficient by itself. Perl does
have a debugger too, but it's up to the programmer to use it effectively, not the
tool itself. Even in C++/C with large programs, it's up to the programmer to decide
which warnings are useful when the compiler complains.
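For anyone wanting to try this, a common way to profile a Perl script is Devel::NYTProf from CPAN, and the built-in debugger is invoked with `perl -d`. A minimal sketch, assuming Devel::NYTProf has been installed and using a hypothetical script name `parse_metadata.pl`:

```shell
# Run the script under the NYTProf profiler; writes ./nytprof.out
perl -d:NYTProf parse_metadata.pl ~/books

# Turn the raw profile into a browsable HTML report with
# per-line and per-subroutine timings
nytprofhtml

# Alternatively, step through the script interactively with
# Perl's built-in debugger (b to set breakpoints, n/s to step)
perl -d parse_metadata.pl ~/books
```

The HTML report makes it easy to see whether time is going into the metadata parsing itself or into per-file I/O overhead, which speaks to the strategy question raised above.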
P.S. I am CCing the list, as I forgot to earlier, so others can join in if they wish.