Conditionally performing a procedure inside a function

In summary, the thread comes back to Knuth's advice: forget about small efficiencies, say about 97% of the time, because premature optimization is the root of all evil; yet do not pass up your opportunities in that critical 3% of your program.
  • #1
Jamin2112
Suppose I have a function that gets called and performs a procedure provided that a certain condition is met. One way to do it is, of course, like the following real example from something I'm working on in JavaScript right now.

Code:
if (window.File && window.FileReader && window.FileList && window.Blob) {
  //do your stuff!
} else {
  alert('The File APIs are not fully supported by your browser.');
}

That's how they do it here (http://www.htmlgoodies.com/beyond/j...e-javascript-filereader.html#fbid=gN9gIXxlV_E), and of course it's perfectly valid.

However, to me, it seems weird to have it written that way, because the way I think about things like this is, "Check that I'm not prevented from doing X, and if I am, make a note about it; otherwise continue on to doing X." So I prefer to use an early return statement in languages that support it, such as JavaScript. I do this:

Code:
if (!window.File || !window.FileReader || !window.FileList || !window.Blob)
{
  alert('The File APIs are not fully supported by your browser.');
  return;
} 
else 
{
   // do the procedure
}

I'm just wondering whether one is definitely better than the other, or whether my mind is starting to become poisoned with religion after reading too many threads here, haha. I also realize that the first piece of code is faster because window.File && window.FileReader && window.FileList && window.Blob is, on average, executed faster than !window.File || !window.FileReader || !window.FileList || !window.Blob.

Should I even be asking this question or am I becoming overly concerned about things that don't matter?
 
  • #2
if (A || B || C || D)
{
// do something
}
else
{
// do something else
}
return;

The above is risky in some languages, because if condition A or B succeeds (just for example), C or D will never be evaluated, which in some cases is not the desired behavior.

Also, unless you have objectively measured the performance of a code structure in every browser, I wouldn't be too sure about which form executes faster. Even if one is faster, is the difference more significant than readable code?
 
  • #3
harborsparrow said:
if (A || B || C || D)
{
// do something
}
else
{
// do something else
}
return;

The above is risky in some languages, because if condition A or B succeeds (just for example), C or D will never be evaluated, which in some cases is not the desired behavior.
I thought that was the case with && in Java, but I didn't know it also happened with || in some languages. A bug like that in anyone's code could easily bring hours of headaches.
 
  • #4
That both && and || (or their equivalents in some other language) are short-circuit operators is true in *most* languages. Some few provide both eager and short-circuit versions (e.g., and versus and then in Ada), and one (Fortran) leaves the decision as to whether the operation is eager or short-circuit up to the compiler.

Note that C++ boolean expressions are short-circuit only with respect to built-in types. Expressions involving an overloaded operator&& or operator|| are not short-circuit; a small sketch of this follows at the end of this post. (You should never overload those two operators, or the comma operator, for this very reason.)

Jamin2112 - You worry too much about micro optimizations. Donald Knuth wrote, "We should forget about small efficiencies, say about 97% of the time: premature optimization is the root of all evil. Yet we should not pass up our opportunities in that critical 3%."

Is this boolean expression part of the critical 3% of your program?
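To make that short-circuit point concrete, here's a minimal sketch (the Flag type and the flag/raw helpers are invented purely for illustration, not from anyone's actual code): with an overloaded operator&&, both operands are evaluated before the call; with built-in bools, the right-hand side is skipped.

Code:
#include <iostream>

// Made-up wrapper type, used only to demonstrate the effect of overloading.
struct Flag { bool value; };

// A user-defined operator&&: both arguments must be evaluated before the
// function is called, so short-circuit evaluation is lost.
Flag operator&&(Flag lhs, Flag rhs) { return Flag{lhs.value && rhs.value}; }

Flag flag(const char* name, bool v) {
    std::cout << "evaluated Flag " << name << '\n';
    return Flag{v};
}

bool raw(const char* name, bool v) {
    std::cout << "evaluated bool " << name << '\n';
    return v;
}

int main() {
    flag("A", false) && flag("B", true);  // prints both A and B: no short circuit
    raw("C", false) && raw("D", true);    // prints only C: built-in && short-circuits
    return 0;
}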
 
  • #5
harborsparrow said:
if (A || B || C || D)
{
// do something
}
else
{
// do something else
}
return;

The above is risky in some languages, because if condition A or B succeeds (just for example), C or D will never be evaluated, which in some cases is not the desired behavior.

Psinter said:
I thought that was the case with && in Java, but I didn't know it also happened with || in some languages. A bug like that in anyone's code could easily bring hours of headaches.

If you write code that only "works" because the logical tests in your IF statements perform the correct side effects, you deserve everything you get IMHO.
 
  • #6
D H said:
You worry too much about micro optimizations.

I once worked on developing a code over several years, which demonstrated that fact big-time.

The first version spent about 95% of its execution time in just one small loop, which was three lines of code out of a total of about 200,000.

On average, we sped up the run time of the complete code by a factor of about 2x per year, for several years. The final version ran more than 100x faster than the original. Those improvements were scaled to the same computer hardware; the actual speed-up was close to 1000x including the hardware performance improvements as well.

So what did we do to those 3 lines of code to make the program run 100 times faster? NOTHING. What we did was improve the high level algorithms to solve the complete problem, so those lines of code were called less often.
 
  • #7
AlephZero said:
I once worked on developing a code over several years, which demonstrated that fact big-time.

The first version spent about 95% of its execution time in just one small loop, which was three lines of code out of a total of about 200,000.
Very similar experience. A dozen lines of code out of a similarly sized code base was responsible for 90% of the CPU usage. Those dozen or so lines were in a spherical harmonics expansion function. The code looked something like this:
Code:
for (int ii = 2; ii <= degree; ++ii) {
   // Lots of code elided
   for (int jj = 1; jj <= order; ++jj) {
      // Lots of code elided
      sum_gamma += (jj+ii+1) * Pnm[ii][jj] *
                   (Cnm[ii][jj] * C_tilde[jj] + Snm[ii][jj] * S_tilde[jj]);
      // Lots of code elided
   }
   // Lots of code elided
}
The code had to calculate the potential, its gradient, and the Hessian. Some of our users insisted on compiling unoptimized because a bug traced to an optimizing compiler supposedly crashed a satellite some twenty years ago.

The Pnm, Cnm, Snm, and such are data members. When compiled unoptimized, we could see the assembly code extracting the value Pnm[ii][jj] as if we had written *(*((*this).Pnm+ii)+jj). We were going through three pointers! We cached those things so that only one indirection was performed in the inner loop. This also helped with the performance when the code was compiled optimized. But that wasn't enough. The code was still too slow.
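Roughly, the caching looked like this (the function signature, array types, and names here are guesses based on the snippet above, not the actual class):

Code:
// Hoisting the row pointers out of the inner loop replaces the chain
// (*((*this).Pnm + ii))[jj], i.e. several indirections per access, with a
// single indexed load from a local pointer.
double spherical_harmonics_sum(int degree, int order,
                               double** Pnm, double** Cnm, double** Snm,
                               const double* C_tilde, const double* S_tilde)
{
    double sum_gamma = 0.0;
    for (int ii = 2; ii <= degree; ++ii) {
        const double* Pnm_row = Pnm[ii];  // cached once per outer iteration
        const double* Cnm_row = Cnm[ii];
        const double* Snm_row = Snm[ii];
        for (int jj = 1; jj <= order; ++jj) {
            sum_gamma += (jj + ii + 1) * Pnm_row[jj] *
                         (Cnm_row[jj] * C_tilde[jj] + Snm_row[jj] * S_tilde[jj]);
        }
    }
    return sum_gamma;
}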

Next problem: there were lots of common subexpressions that, with the code compiled unoptimized, could not possibly be identified as such and calculated just once. Even when we did enable optimization, the compiler wasn't recognizing a lot of those common expressions, probably because of worries about aliasing. So we collected those common expressions manually.
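As a sketch of what collecting a common expression by hand looks like (the names and the particular expressions are invented; the real ones were inside the spherical harmonics loops):

Code:
// Before: Cnm[ii][jj] * C_tilde[jj] appears twice per iteration, and the
// compiler may refuse to hoist it because it cannot prove the arrays don't alias.
//   sum_a += Cnm[ii][jj] * C_tilde[jj] * Pnm[ii][jj];
//   sum_b += Cnm[ii][jj] * C_tilde[jj] * dPnm[ii][jj];
// After: compute it once and reuse it.
void accumulate_terms(int ii, int jj,
                      double** Cnm, const double* C_tilde,
                      double** Pnm, double** dPnm,
                      double& sum_a, double& sum_b)
{
    const double cc = Cnm[ii][jj] * C_tilde[jj];  // the shared subexpression
    sum_a += cc * Pnm[ii][jj];
    sum_b += cc * dPnm[ii][jj];
}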

And it was still too slow. The cause? It turned out the remaining drain on the CPU involved multiplying an integer and a double. There are lots of such multiplications in a spherical harmonics expansion. Converting an integer to a double turns out to be *expensive*. The solution was to build an int_to_double array at instance initialization time. We knew the minimum (zero) and maximum values to be converted at that time. Doing a lookup in an array that maps 0 to 0.0, 1 to 1.0, etc., turned out to be a much, much faster conversion than the incredible amount of bit mangling needed to convert an integer to a double.
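One way to set up that lookup table might look like this (a sketch only; IntToDouble and its usage are made-up names, not the actual code):

Code:
#include <vector>

// Table built once, at initialization, when the maximum value is known.
// In the hot loop an int-to-double conversion becomes a plain array load.
class IntToDouble {
public:
    explicit IntToDouble(int max_value) : table_(max_value + 1) {
        for (int i = 0; i <= max_value; ++i) {
            table_[i] = static_cast<double>(i);  // conversion paid once, up front
        }
    }
    double operator()(int i) const { return table_[i]; }  // just an array lookup

private:
    std::vector<double> table_;
};

// Hypothetical use in the inner loop:
//   sum_gamma += itod(jj + ii + 1) * Pnm_row[jj] * ...;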

Did we carry this notion forward to other places where we multiplied an integer and a double? No. It makes the code look ugly and harder to understand. We did it in that spherical harmonics code because we had to. It would have been a case of premature optimization anywhere else.
 
  • #8
D H said:
Very similar experience. A dozen lines of code out of a similarly sized code base was responsible for 90% of the CPU usage.

Similar but different. In my case the loop was effectively just a BLAS level 1 "AXPY" operation. Replacing it with a call to a BLAS routine did nothing except add the overhead of a function call.
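For anyone who hasn't met the term, a level 1 AXPY amounts to the sketch below (not the actual loop from that code); there is so little work per element that a library call has nothing to gain over the inline loop.

Code:
// y <- a*x + y, element by element: the whole of a BLAS level 1 AXPY.
void axpy(int n, double a, const double* x, double* y) {
    for (int i = 0; i < n; ++i) {
        y[i] += a * x[i];
    }
}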

But attacking the complete problem from the other end, and figuring out how to make a solution algorithm that converged in 20 iterations (taking about an hour each) converge in 10, gave the first factor of 2. The others came from similar high level changes, but too complicated to explain in one paragraph.

After somebody on the team had figured out how to get one of those savings, the consensus was usually "OK, that's the end point, there's nothing else we can do here" - until 6 months later somebody said "hey, what if we tried ...". Insight develops at its own pace. You can't make it happen by putting milestones on a project management chart.
 
  • #9
AlephZero said:
Similar but different.
True. That small chunk of our code is now ugly as sin. That's almost always the inevitable result of programming for utmost efficiency. Your code is still nice and clean. That is more often the outcome when you dig into some hoggish function: you take one look at the code and go :eek: : "They're using bubble sort to sort a list containing thousands of elements!" or "They're calling new and delete inside a triply nested loop!".
AlephZero said:
You can't make it happen by putting milestones on a project management chart.
That you can't make insights happen on demand doesn't seem to stop project management from expecting 1.3 deep insights per developer per month.
 
  • #10
AlephZero said:
So what did we do to those 3 lines of code to make the program run 100 times faster? NOTHING. What we did was improve the high level algorithms to solve the complete problem, so those lines of code were called less often.

Rule number one: code that is never executed doesn't take processor time.
 
  • #11
AlephZero said:
Insight develops at its own pace.

I think I am going to hang it on the wall here.
 

1. What is conditional performance in a function?

Conditional performance in a function refers to the ability of a function to evaluate a certain condition and perform a specific task or procedure based on the result of that condition.

2. How do you conditionally perform a procedure inside a function?

To conditionally perform a procedure inside a function, you can use an if statement or a switch statement to check the condition and execute the procedure if the condition is met.

3. What are the benefits of using conditional performance in a function?

Conditional performance in a function allows for more flexibility and control in the execution of code. It also makes the code more efficient by only executing certain procedures when necessary.

4. Can you provide an example of conditionally performing a procedure inside a function?

Yes, here is an example of a function that checks if a number is greater than 10 and prints a message if it is:

function checkNumber(num) {
    if (num > 10) {
        console.log("The number is greater than 10.");
    }
}

5. Are there any limitations to using conditional performance in a function?

One limitation of conditional performance in a function is that it can make the code more complex and difficult to read if there are too many nested conditions. It is important to use it sparingly and to consider alternative approaches if the code becomes too complicated.
