Debugging Memory Use in Perl - Help!


Cross-posting this from a Stack Overflow Question asked by a colleague.

Some processes that run for multiple hours (ETL jobs) suddenly started consuming a lot more RAM than usual. Analysis of the changes in the relevant release is a slow and frustrating process. I am hoping to identify the culprit using more automated analysis.

Our live environment is Perl 5.14 on Debian Squeeze.

I have access to lots of OS X 10.5 machines, though. DTrace and Perl seem to play together nicely on this platform, whereas using DTrace on Linux requires a bit more work. I am hoping that memory allocation patterns will be similar between our live system and a dev OS X system - or at least similar enough to help me find the origin of this new memory use.

This slide deck:

shows how to use DTrace to count the number of calls to malloc per Perl sub. I am interested in going further: tracking the total amount of memory that Perl allocates while executing each sub over the lifetime of a process.
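As a starting point, something like the following D script might do it - this is an untested sketch, assuming a perl binary built with DTrace support (so the `perl` provider's `sub-entry`/`sub-return` probes fire). It attributes each `malloc` to the most recently entered sub rather than maintaining a proper call stack, and it ignores `calloc`/`realloc`, so treat the numbers as a rough guide:

```d
#!/usr/sbin/dtrace -s

/* Remember the name of the sub we are currently inside.
   sub-entry's arg0 is the sub name (a userland string). */
perl$target:::sub-entry
{
    self->sub = copyinstr(arg0);
}

/* Clear it on return; allocations outside any sub go unattributed. */
perl$target:::sub-return
{
    self->sub = 0;
}

/* Sum the requested allocation size (malloc's arg0) per sub. */
pid$target::malloc:entry
/self->sub != 0/
{
    @bytes[self->sub] = sum(arg0);
}

END
{
    printa(@bytes);
}
```

Run it against a live process with something like `sudo dtrace -s malloc_by_sub.d -p <pid>`, or launch the job under it with `-c`.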

Any ideas on how this can be done?

We make great use of NYTProf for profiling the speed of our processes, but unfortunately there doesn't seem to be anything anywhere near as good for memory consumption.
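The closest thing we have found so far is inspecting individual data structures by hand with Devel::Size from CPAN - useful for confirming a suspect once you have one, but nothing like a whole-process profile. A minimal example:

```perl
#!/usr/bin/perl
use strict;
use warnings;

# Devel::Size (CPAN) reports the memory footprint of a single
# data structure. The %cache hash here is just a made-up example.
use Devel::Size qw(size total_size);

my %cache = ( foo => [ 1 .. 1000 ] );

# size() counts only the top-level structure itself;
# total_size() follows references and counts everything reachable.
printf "top-level: %d bytes\n", size( \%cache );
printf "total:     %d bytes\n", total_size( \%cache );
```

The reported byte counts vary by perl version and build, so they are only good for relative comparisons.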

Does anybody have any advice for "profiling" memory use in Perl?




About Alex Balhatchet

CTO at Lokku Ltd.