Phoenix.pm: perlcc

intertwingled at qwest.net
Sat Mar 8 05:55:00 CST 2003


Scott Walters wrote:

> Hi folks.
>
> I'm trying to squeeze every ounce of performance possible out of my server
> machine, the Apple 7300/180 (the 180 is for 180 MHz).
>
> I have one script that is getting hit a lot and should respond quickly,
> and supposedly Perl spends much if not most of its time just parsing
> the source code, so I gave perlcc a whirl.
>
> To those not familiar, perlcc lets perl parse the program, then dumps
> the symbol table and opcode tree as a bunch of pre-initialized C data
> structures, along with the glue needed to link to libperl, saving the
> program from having to parse the .pl file on every run.
>
>                   normal              perlcc binary
> wiki.cgi       0.301 + 0.678        0.361 + 1.547
> assemble.cgi   1.607 + 1.191        1.653 + 2.509
>
> Time is user time + system time.
>
> Instead of being faster, a compiled binary is slower.
>
> The perlcc binary takes marginally more user time, but more than twice as
> much system time. Short of profiling the exe and perl, which I don't even
> know how to do, does anyone know why this might be? The binaries
> are around 2 megs, so they should be competitive with the memory footprint
> of perl itself. Perhaps time spent in ld, linking to the shared library?
> Time spent doing VM stuff?
>
> Of course, I could do FastCGI or something, but I'm curious =)
>
> It's hard to imagine that the Perl parser is faster at constructing an
> opcode tree than just bringing an already compiled one in from disc.
>
> Thanks,
> -scott

Rewrite it in C or assembly language.  It is the Only Way, my son.



