Working on a large project, we were adding a big new feature: extending an old feature with a lot more power. Except it was taking forever. So we decided to look into what the app was actually doing, compare the old feature against the new one, and see how bad the difference really was.
Timing it, it turned out to be only 12 vs. 15 seconds in the worst-case scenarios we could think of to try. OTOH, for a webapp, both of those are horrendous answers. That's when we decided to look deeper.
Code profiling is that deeper step. It provides usage statistics: how long each function takes, how long each callout takes, and so on. In the past we'd all done ad-hoc profiling by putting debug statements in the places we thought were taking the longest:
print "STARTING FUNCTION XXX " + now();
print "EXITING FUNCTION XXX " + now();
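If you want the actual elapsed time rather than two timestamps to subtract by hand, the same idea in runnable form might look like this (a rough Python sketch; the function name and workload are just stand-ins):

import time

def do_work():
    start = time.perf_counter()
    print("STARTING FUNCTION do_work", start)
    # ... the actual work being timed ...
    end = time.perf_counter()
    print("EXITING FUNCTION do_work", end, "elapsed:", end - start)

do_work()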
The problem with this, obviously, is that it requires foresight or backpatching.
Foresight, because you put this in every function as a call to a pair of defined functions that you null out in production:
function enter_function(name) {
    if DEBUG {
        print "ENTERING " + name + " " + now();
    }
}
That way you get the output, but it goes away in production (when DEBUG is presumably set to false). Of course, now you always have it for all functions (assuming it's part of your function template, so it always gets added).
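In Python, for instance, you'd probably reach for a decorator instead of a function template; it produces the same enter/exit output, and when the flag is off it hands the function back untouched, so there's no overhead at all. A minimal sketch, assuming a module-level DEBUG flag (all names here are hypothetical):

import time

DEBUG = True  # set to False in production

def traced(fn):
    # With DEBUG off, return the function untouched: zero overhead.
    if not DEBUG:
        return fn
    def wrapper(*args, **kwargs):
        print("ENTERING", fn.__name__, time.perf_counter())
        try:
            return fn(*args, **kwargs)
        finally:
            print("EXITING", fn.__name__, time.perf_counter())
    return wrapper

@traced
def slow_query():
    time.sleep(0.1)  # stand-in for the real work

slow_query()

The try/finally means the EXITING line still prints even if the function raises, which matters when you're hunting for the call that never returns.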
The second option is backpatching: going back through your functions after the fact and adding either the prints or the function calls. Which gives you the same problem, but now you're also no longer sure that you've hit all of the functions.
The third option is modifying the compiler/interpreter so that it puts in that code for you. Which is, honestly, what we did. Because it's easier than drinking from the firehose or attaching the hose to a kitchen faucet.
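Most mainstream runtimes already expose that hook, so you rarely have to hack the interpreter yourself. In Python, for example, the standard-library cProfile module makes the interpreter record every call and its timing with no source changes at all; a minimal sketch (the workload is made up):

import cProfile
import pstats

def main():
    # stand-in for the feature being profiled
    sum(i * i for i in range(1_000_000))

cProfile.run("main()", "profile.out")  # interpreter instruments every call for you
stats = pstats.Stats("profile.out")
stats.sort_stats("cumulative").print_stats(10)  # ten most expensive call paths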