[Melbourne-pm] Error checks without packages
toby.corkindale at strategicdata.com.au
Mon Aug 1 17:57:17 PDT 2011
What are you looking to optimise here? It sounds like you're
concentrating on start-up/initialisation performance?
On 02/08/11 10:37, Tim Hogard wrote:
>> Isn't Perl's IO (from v5.7) almost completely reliant on "PerlIO" --
>> i.e., open() is using PerlIO anyway? So including any IO package
>> adds minimal overhead (making the choice of "should I load an IO package?"
>> largely irrelevant)?
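[For what it's worth, the claim is easy to check from core Perl alone: PerlIO::get_layers() (core since 5.8) reports the layers a plain open() already uses, with no IO:: module loaded. A quick sketch -- the file path and reported layer names will vary by platform and build:

```perl
#!/usr/bin/perl
use strict;
use warnings;

# No "use IO::File" anywhere -- plain open() still goes through PerlIO.
open my $fh, '<', '/etc/hosts' or die "open: $!";

# Typically reports something like "unix, perlio" on a default build.
print join(', ', PerlIO::get_layers($fh)), "\n";

close $fh or die "close: $!";
```

So loading an IO:: module changes what is compiled at startup, not which IO machinery open() uses at runtime.]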
> I was thinking that was the case too, but loading the module and running
> truss shows there is far more overhead than there should be.
> There is a way to build modules into the core binary if I need it.
> I've decided that if I test the close and it fails, I still have
> all the data so I can attempt to write it someplace else. I'm more
> interested in knowing that something went wrong than in exactly what
> failed (like a disk failure, which will show up a different way soon
> after the fact, but not before).
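[A minimal sketch of that fallback idea, assuming the data is still held in a scalar; the helper name and the alternate path are made up for illustration. close() is checked explicitly, since that is where buffered write errors often surface:

```perl
use strict;
use warnings;

# Hypothetical helper: write $data to $path; if the write or the
# close fails, retry at $alt instead. Returns the path that worked,
# or undef if both attempts failed.
sub write_with_fallback {
    my ($data, $path, $alt) = @_;
    for my $target ($path, $alt) {
        open my $fh, '>', $target or next;
        print {$fh} $data or do { close $fh; next };
        close $fh and return $target;   # a failed close() counts as a failure
    }
    return;
}
```

]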
> I was looking at the overhead from adding modules and I see lots
> of bloat and the potential to blow out under heavy loads.
> On Solaris there are three classes of system calls: slow, fast, and
> very fast. The slow ones fall into waiting and non-waiting, so things
> like open() will wait unless the OS already has the resource open.
> The fast ones are things like brk(), which means your process should
> be back after two task switches; those are quick when the run queue
> is about the same size as the number of CPUs, but can blow out
> when the run queue is larger.
> The very fast ones are calls like time(), which does a context
> switch, copies data from system space to user space, and then
> context switches back without reordering the run queue.
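[A rough way to see the relative cost from Perl itself, using the core Benchmark module to compare a "very fast" call (time) against a "slow" one (open/close). Absolute numbers depend entirely on the OS, filesystem cache, and load, so treat this as an illustration only:

```perl
use strict;
use warnings;
use Benchmark qw(timethese);

# Compare a cheap syscall against open()+close() of a cached file.
timethese(100_000, {
    time_call => sub { my $t = time },
    open_call => sub {
        open my $fh, '<', '/etc/hosts' or die "open: $!";
        close $fh;
    },
});
```

On most systems the open/close pair is slower by an order of magnitude or more, even when the file is already in the page cache.]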
> I should track down why I'm seeing getuid() being called
> twice, followed by getgid() also being repeated.
> It's also odd that the brk() calls on startup aren't coalesced,
> so there is a chain of separate requests for memory.
> I've been tracking down shared library issues as well, and many of
> them read config files. It turns out that even if a program keeps
> opening and closing the same file millions of times, the caching
> isn't as good as when there is another process that simply opens the
> files, reads the first block, and then sleeps forever.
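[That "sleeper" trick can be sketched in a few lines of Perl. This is a hypothetical cache-pinner, not anything from the thread: it opens each file named on the command line, reads one block so the first pages are touched, and then sleeps forever while holding the handles open:

```perl
use strict;
use warnings;

# Hypothetical cache pinner: keep files open (and their first block
# recently read) so the OS keeps them warm for other processes.
my @handles;
for my $file (@ARGV) {
    open my $fh, '<', $file or do { warn "open $file: $!"; next };
    read $fh, my $buf, 8192;    # touch the first block
    push @handles, $fh;         # keep the handle open
}
sleep;    # sleep with no argument sleeps forever
```

Whether this actually helps depends on the kernel's cache eviction policy, so it is worth measuring before relying on it.]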
> Melbourne-pm mailing list
> Melbourne-pm at pm.org