Using a length() function in a loop's condition

In summary: in the vast majority of your code, worrying about things like this is of no practical significance. Making the code run 0.001% faster but increasing the chance of a bug by 0.01% is a bad deal. Premature optimization is the root of all evil (or at least most of it) in programming (Donald Knuth, 1974 Turing Award lecture, "Computer Programming as an Art"). If a program meets its timing requirements by a wide margin, you shouldn't care that some little bit of code could have been written more efficiently.
  • #1
Jamin2112
I see on StackOverflow people doing stuff like


for (int k = 0; k < something.length(); k++)


where the body of the for loop never has any chance of modifying something's length. I thought that was considered bad coding practice. The implementation of something's class might not simply return a private member variable; it might instead calculate the length on each call. Even if it did return a private variable, that's a couple of nanoseconds of overhead. I guess it's considered OK coding practice to do stuff like that? I mean, it does look better than

for (int k = 0, len = something.length(); k < len; k++)

I've also seen people on StackOverflow say "Be careful of micro optimization." Can someone explain what they mean by this?
 
  • #2
Jamin2112 said:
I see on StackOverflow people doing stuff like


for (int k = 0; k < something.length(); k++)


where the body of the for loop never has any chance of modifying something's length. I thought that was considered bad coding practice. The implementation of something's class might not simply return a private member variable; it might instead calculate the length on each call. Even if it did return a private variable, that's a couple of nanoseconds of overhead. I guess it's considered OK coding practice to do stuff like that? I mean, it does look better than

for (int k = 0, len = something.length(); k < len; k++)

I've also seen people on StackOverflow say "Be careful of micro optimization." Can someone explain what they mean by this?

I would lean toward writing the loop like this:
Code:
int len = something.length();
for (int k = 0; k < len; k++)
{
   ...
}
My reasoning is that having the call something.length() in the loop's test expression requires a call to the length() function on every iteration. From that perspective, your second example is nearly identical to what I propose: its initialization expressions (k = 0 and len = something.length()) have to be evaluated only once, before the first iteration of the loop.

As far as "be careful of micro optimization" goes -- compilers are generally pretty smart these days. A compiler might recognize that the value returned by length() isn't going to change, and so might substitute a constant for this length in the code that it generates. In that case, the effort I put into optimization might not produce any discernible results.
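To illustrate, here's a minimal sketch of the kind of case I mean (the Widget class is made up for the example): the accessor is a trivial inline const function, so an optimizing compiler can typically evaluate it once and reuse the result.
Code:
#include <cstddef>

class Widget {                                   // made-up stand-in for "something"
    std::size_t len_ = 100;
public:
    std::size_t length() const { return len_; } // trivial, inline, no side effects
};

int sum_indices(const Widget& w) {
    int total = 0;
    // With optimization on (e.g. -O2), a compiler that can see the body of
    // length() and prove the loop never modifies w is free to evaluate
    // w.length() once and keep the result in a register.
    for (std::size_t k = 0; k < w.length(); ++k)
        total += static_cast<int>(k);
    return total;
}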
 
  • #3
Jamin2112 said:
I've also seen people on StackOverflow say "Be careful of micro optimization." Can someone explain what they mean by this?

What they mean is that in 99.999% of your code, worrying about things like this is of no practical significance. Making the code run 0.001% faster but increasing the chance of a bug by 0.01% is a bad deal. The more times you do that, the more likely you are to lose.

Next week (or next year) you or somebody else will change the code so the length of "something" isn't constant any more. They might not even know that the code they are working on is called from inside this particular loop. Goodbye optimization, hello bug...
 
  • #4
Code:
for (int k = something.length() - 1; k >= 0; k--)  // count down; length() is evaluated only once, at initialization
 
  • #5
AlephZero said:
What they mean is that in 99.999% of your code, worrying about things like this is of no practical significance.

In college, though, if a homework grader was in a bad mood or didn't like a student, they might decide to mark the student down for having it in their code.
 
  • #6
Jamin2112 said:
I see on StackOverflow people doing stuff like


for (int k = 0; k < something.length(); k++)


where the body of the for loop never has any chance of modifying something's length. I thought that was considered bad coding practice. The implementation of something's class might not simply return a private member variable; it might instead calculate the length on each call. Even if it did return a private variable, that's a couple of nanoseconds of overhead. I guess it's considered OK coding practice to do stuff like that?
In general, yes. If you are programming in C++, it's standard to use an optimizing compiler to compile your code. Modern optimizers can be *very* smart about pulling invariant computations out of a loop.

I've also seen people on StackOverflow say "Be careful of micro optimization." Can someone explain what they mean by this?
Premature optimization is the root of all evil (or at least most of it) in programming. -- Donald Knuth, in his 1974 Turing Award lecture, "Computer Programming as an Art"

Suppose you have a program that meets its timing requirements by a wide margin. Do you care that some little bit of code could have been written more efficiently? You shouldn't.

Suppose you have a program that doesn't meet its timing requirements. Are you going to fix every little thing that might hint at poor performance? You shouldn't. You should instead profile the program (see the timing sketch after this list) and see why it isn't working as required. Most likely, the problem is one of the following:
  • Some fool of a manager who remembers hearing that an optimizing compiler caused a program to crash <something> back in 1969 has banned the use of optimizing compilers.

    The proper response: Get another job.
  • Your program is spending too much time I/O suspended. Tuning your CPU-bound computations doesn't make a bit of sense if this is the problem.

    The proper response: Fix the I/O problem.
  • Some fool of a programmer used an O(N^3) algorithm in a tight inner loop when there is an O(N log N) algorithm that accomplishes the same task. Once again, tuning your CPU-bound computations won't help.

    The proper response: Fix that algorithm.
  • A tight loop comprising ten lines of code out of a 100 KSLOC program is found to be responsible for 90% or more of the CPU usage. Fine-tuning the 99.99% of the code that isn't part of this tight loop won't help solve this problem. Fine-tuning the ten lines of code that are responsible for draining the CPU will.

    The proper response: Tune those ten lines. That those ten lines of code change from a nicely readable chunk of code into nightmarish unmaintainable gobbledygook is OK in this case, because profiling has demonstrably shown that this code is problematic.
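For what it's worth, here is a rough-and-ready timing sketch (hot_loop is a hypothetical stand-in for the suspect code). A real profiler such as gprof or VTune is the proper tool, but even crude wall-clock measurement beats guessing:
Code:
#include <chrono>
#include <cstdio>

static volatile long sink = 0;  // volatile so the loop isn't optimized away

// Hypothetical stand-in for the ten hot lines.
static void hot_loop() {
    for (long i = 0; i < 10000000L; ++i) sink += i;
}

int main() {
    using clock = std::chrono::steady_clock;
    clock::time_point start = clock::now();
    hot_loop();
    clock::time_point stop = clock::now();
    long long us = std::chrono::duration_cast<std::chrono::microseconds>(stop - start).count();
    std::printf("hot_loop: %lld microseconds\n", us);
    return 0;
}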
Jamin2112 said:
In college, though, if a homework grader was in a bad mood or didn't like a student, they might decide to mark the student down for having it in their code.
You're out of college now, so why do you care what someone dinged you for in the past?
 
  • #7
It's still good to notice these things, I think. This one is quite trivial, but they aren't all trivial, and the more of them you spot, the easier they become to spot.
 
  • #8
I would favor:
Code:
for (int k = 0, len = something.length(); k < len; k++)
or the equivalent
Code:
int len = something.length();
for (int k = 0; k < len; k++)
Here are my points:
1) I certainly wouldn't get bogged down with optimization, but I wouldn't code without regard for it either.
In the situation you've described, where "length" isn't simply returning a private integer member, it is often computed by a loop itself. So why use an O(N^2) algorithm when N is so readily available? (See the strlen() sketch after this list.)
2) By retrieving the value once, you are making it clear that it does not change.
3) Because the compile and link phases are usually not integrated for optimization, you should not expect this to be optimized for you. If the "length" function has been declared "const", then there are conditions where the compiler will be able to deduce that it is not changing. But it is easier to take the len assignment out of the loop than it is to determine whether the compiler will be able to do it - and will continue to be able to do it in future revisions.
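To illustrate point 1, here is a sketch of the O(N^2) trap using C's strlen(), whose cost is itself linear in the string length (whether a particular compiler hoists the call is exactly the kind of thing you shouldn't have to bet on):
Code:
#include <cstddef>
#include <cstring>

// Quadratic: strlen() walks the whole string, so the test is O(N), done N times.
std::size_t count_slow(const char* s) {
    std::size_t n = 0;
    for (std::size_t k = 0; k < std::strlen(s); ++k)
        ++n;
    return n;
}

// Linear: the length is computed once, in the initialization.
std::size_t count_fast(const char* s) {
    std::size_t n = 0;
    for (std::size_t k = 0, len = std::strlen(s); k < len; ++k)
        ++n;
    return n;
}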

And if the body of the for loop doesn't use "k", you can go with Borek's advice.

However, when length is more complicated than returning a member variable, there is usually a way of detecting when you've reached the end without using the length function. For example:
Code:
for (pObj = pFirst; pObj; pObj = pObj->Next) ...
or
Code:
for (k = 0; !(obj = array[k]).bEndData; k++) ...
 
  • #9
Time spent doing the kind of optimization suggested in the second post of this thread is no longer necessary, as all the compilers I know of are now smart enough to make the length() function call only once. IMO, code should usually be written for best human readability, as long-term maintenance will end up costing a great deal if human readability is sacrificed.

In situations where performance is paramount, the overall design should look for optimizations, rather than the code level. And if code itself must be "optimized", conventional wisdom is to make the code work and be highly readable first, and then optimize the working code (incrementally) at the end. This is because trying to optimize code for performance from the get-go tends to result in obscurity.
 
  • #10
harborsparrow said:
Time spent doing the kind of optimization suggested in the second post of this thread is no longer necessary, as all the compilers I know of are now smart enough to make the length() function call only once. IMO, code should usually be written for best human readability, as long-term maintenance will end up costing a great deal if human readability is sacrificed.

In situations where performance is paramount, the overall design should look for optimizations, rather than the code level. And if code itself must be "optimized", conventional wisdom is to make the code work and be highly readable first, and then optimize the working code (incrementally) at the end. This is because trying to optimize code for performance from the get-go tends to result in obscurity.
If you write a length function and don't declare it "const", the compiler will be unable to optimize it to one call.
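To be concrete, the distinction looks like this (CContainer is a made-up name for illustration):
Code:
class CContainer {
    int m_len;
public:
    CContainer() : m_len(0) {}
    int length() const { return m_len; }  // const: promises not to modify *this
    int lengthNC()     { return m_len; }  // non-const: no such promise to the compiler
};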
 
  • #11
.Scott said:
If you write a length function and don't declare it "const", the compiler will be unable to optimize it to one call.
That's not necessarily true. Compilers are very good at spotting invariants, particularly if there's no aliasing. It's aliasing rather than non-constness that is the bugaboo of compiler theory. The compiler can typically tell whether some non-const variable isn't being modified. What kills things is when you have two pointers to the same chunk of memory.

For example, memcpy(ptr+1,ptr,N) is a nifty way to propagate a value throughout an array on some computers. On other computers, who knows what will happen. The specification for memcpy explicitly allows implementers to assume that the chunks of memory pointed to by the source and destination arguments aren't aliased (i.e., that they don't overlap).

The potential for aliasing is what makes C and C++ compilers generate slow code compared to Fortran. In C, you can qualify pointer arguments with the restrict keyword, and voilà! your code is Fortran-like in speed. The restrict keyword hasn't made it into C++ yet, at least not officially, but several C++ compilers do provide an equivalent.
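For example, here is a minimal sketch using the __restrict extension (this spelling is accepted by GCC, Clang, and Visual C++; standard C spells it restrict):
Code:
// The __restrict qualifiers promise the compiler that dst and src never
// alias, so it can vectorize without emitting runtime overlap checks.
void scale(float* __restrict dst, const float* __restrict src, int n) {
    for (int i = 0; i < n; ++i)
        dst[i] = 2.0f * src[i];
}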
 
  • #12
D H said:
That's not necessarily true. Compilers are very good at spotting invariants, particularly if there's no aliasing. It's aliasing rather than non-constness that is the bugaboo of compiler theory. The compiler can typically tell whether some non-const variable isn't being modified.
At compile time, you might have a top.cpp, top.h, and subs.h. The subs.h will declare a class and a member function "int CSubs::length()", but will not define it. Based on that, the compiler cannot optimize - because it does not know what "int CSubs::length()" does. In fact, at that point subs.cpp may not even be written. For example, if the definition of "int CSubs::length()" has debug statements in it that report each time it is called, then optimizing top.cpp by reducing those calls will result in wrong (but revealing) debug message output.
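In other words, all the compiler sees when translating top.cpp is a declaration along these lines (a sketch of the subs.h described above):
Code:
// subs.h -- length() is declared but not defined here, so a compiler
// translating top.cpp alone cannot know what the call does.
class CSubs {
    int m_len;
public:
    CSubs();
    int length();  // body lives in subs.cpp, invisible to top.cpp
};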
 
  • #13
.Scott said:
At compile time, you might have a top.cpp, top.h, and subs.h. The subs.h will declare a class and a member function "int CSubs::length()", but will not define it. Based on that, the compiler cannot optimize - because it does not know what "int CSubs::length()" does.
First off, there's a simple solution to this problem, the same solution to the "Doctor, it hurts when I do this" problem. Don't do that! It's a good idea to define CSubs::length() inline in the header if the function is small and has negligible fanout.

Secondly, modern compiler/linkers can optimize away that call to CSubs::length() at link time. The Microsoft, Intel, GNU, and LLVM suites all support link-time optimization. Note that I wrote "compiler/linker." Compiler theory has revisited the concept that the sole job of the linker is to resolve external references. This leaves out key optimizations such as inlining the hidden implementation of CSubs::length(). Modern compiler/linkers can take advantage of the fact that the implementation is known at link time and rewrite the object code produced at compile time.
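A sketch of the "don't do that" version, with length() defined right in the header so every translation unit that includes it can inline and hoist the call:
Code:
// subs.h -- the definition is visible to every includer.
class CSubs {
    int m_len;
public:
    CSubs() : m_len(0) {}
    int length() const { return m_len; }  // trivially inlinable
};
With GCC or Clang, link-time optimization is enabled by passing -flto to both the compile and link commands; Visual C++ uses /GL when compiling and /LTCG when linking.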


In fact, at that point subs.cpp may not even be written. For example, if the definition of "int CSubs::length()" has debug statements in it that report each time it is called, then optimizing top.cpp by reducing those calls will result in wrong (but revealing) debug message output.
You have to be careful here. The C++ standard explicitly allows compiler writers to optimize away some function calls even if they have side effects. In particular, if you put debug messages in your constructors and destructors you might well see unexpected results. It is your expectations that are buggy, not the compiler. Many C++ programs would run considerably slower were it not for return value optimization and copy elision.
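A quick way to see this for yourself (Tracer is a made-up class with a deliberately noisy copy constructor):
Code:
#include <cstdio>

struct Tracer {
    Tracer() { std::puts("default ctor"); }
    Tracer(const Tracer&) { std::puts("copy ctor"); }  // the side effect
};

Tracer make() {
    Tracer t;  // prints "default ctor"
    return t;  // this copy may be elided (NRVO), side effect and all
}

int main() {
    Tracer t = make();  // with elision, "copy ctor" typically never prints
    return 0;
}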
 
  • #14
D H said:
Compiler theory has revisited the concept that the sole job of the linker is to resolve external references. This leaves out key optimizations such as inlining the hidden implementation of CSubs::length(). Modern compiler/linkers can take advantage of the fact that the implementation is known at link time and rewrite the object code produced at compile time.

The basic idea isn't particularly modern, though the automatic implementation of it is. Back in the days of the early Cray supercomputers (c. 1980) the machine architecture was very inefficient at handling function calls, and compiler directives were often used to tell the compiler to expand functions inline at the source code level (renaming local variables to avoid name clashes) and then globally optimize and vectorize the resulting single large function. That sometimes made debugging "fun", when dealing with a single expanded function containing maybe 10,000 lines of code!
 
  • #15
D H said:
First off, there's a simple solution to this problem, the same solution to the "Doctor, it hurts when I do this" problem. Don't do that! It's a good idea to define CSubs::length() inline in the header if the function is small and has negligible fanout.
My point was that you cannot presume it is optimized. My suggestion for "not doing that" was (and is) to declare length() as const. That is a more general solution - one that will work even if length() is in a dynamically linked library.
D H said:
Secondly, modern compiler/linkers can optimize away that call to CSubs::length() at link time. The Microsoft, Intel, GNU, and LLVM suites all support link-time optimization. Note that I wrote "compiler/linker." Compiler theory has revisited the concept that the sole job of the linker is to resolve external references. This leaves out key optimizations such as inlining the hidden implementation of CSubs::length(). Modern compiler/linkers can take advantage of the fact that the implementation is known at link time and rewrite the object code produced at compile time.
I happen to have Visual Studio 2008 open in front of me now.
Compiling this:
Code:
{
  CString s = "123123123";
  int nn = 0;
  for(int n=0;n<s.GetLength();n++) nn++;
  s.Format("%2.2d",nn);
}
Ideally, the compiler could have optimized away this entire block of code - since it has no effect on the world. However, although GetLength() is declared const, Format(...) is not. So the only optimization, other than register use, was to move GetLength() out of the loop.
Clearly compiler/build theory allows for greater automatic optimization, but as a matter of practicality, much of the optimization is still up to the programmer.

D H said:
You have to be careful here. The C++ standard explicitly allows compiler writers to optimize away some function calls even if they have side effects. In particular, if you put debug messages in your constructors and destructors you might well see unexpected results. It is your expectations that are buggy, not the compiler. Many C++ programs would run considerably slower were it not for return value optimization and copy elision.
In what standard (and in what compiler/linker implementations) is constructor/destructor optimization allowed to omit calls that have side effects? And how would a compiler know which side effects are discardable and which are not? If I put a debug statement into a function, ctor or not, I expect any variables that I am reporting to become undiscardable - so optimizations that were possible before would become inapplicable.

BTW: Here is (some of) the code that resulted from the optimized compile. The lines that actually constructed the string and retrieved the length were moved toward the top of a function I'm working on - so they don't appear here:
Code:
  //
  // Temporary experiment.
  {
    CString s = "123123123";
    int nn = 0;

    for(int n=0;n<s.GetLength();n++) nn++;
00393254  add         ecx,edi 
00393256  add         edx,edi 
00393258  cmp         ecx,eax 
0039325A  jl          00393254 
    s.Format("%2.2d",nn);
0039325C  push        edx  
0039325D  lea         ecx,[esp+20h] 
00393261  push        offset string "%2.2d" (395A9Ch) 
00393266  push        ecx  
00393267  call        dword ptr [__imp_ATL::CStringT<char,StrTraitMFC_DLL<char,ATL::ChTraitsCRT<char> > >::Format (395248h)] 
0039326D  add         esp,0Ch 
  }
 
  • #16
.Scott said:
In what standard (and in what compiler/linker implementations) is constructor/destructor optimization allowed to omit calls that have side effects? And how would a compiler know which side effects are discardable and which are not? If I put a debug statement into a function, ctor or not, I expect any variables that I am reporting to become undiscardable - so optimizations that were possible before would become inapplicable.
Section 12.8, paragraph 31 of the standard.
When certain criteria are met, an implementation is allowed to omit the copy/move construction of a class object, even if the copy/move constructor and/or destructor for the object have side effects. In such cases, the implementation treats the source and target of the omitted copy/move operation as simply two different ways of referring to the same object, and the destruction of that object occurs at the later of the times when the two objects would have been destroyed without the optimization. This elision of copy/move operations, called copy elision, is permitted in the following circumstances (which may be combined to eliminate multiple copies):
  • in a return statement in a function with a class return type, when the expression is the name of a non-volatile automatic object (other than a function or catch-clause parameter) with the same cv-unqualified type as the function return type, the copy/move operation can be omitted by constructing the automatic object directly into the function’s return value
  • in a throw-expression, when the operand is the name of a non-volatile automatic object (other than a function or catch-clause parameter) whose scope does not extend beyond the end of the innermost enclosing try-block (if there is one), the copy/move operation from the operand to the exception object (15.1) can be omitted by constructing the automatic object directly into the exception object
  • when a temporary class object that has not been bound to a reference (12.2) would be copied/moved to a class object with the same cv-unqualified type, the copy/move operation can be omitted by constructing the temporary object directly into the target of the omitted copy/move
  • when the exception-declaration of an exception handler (Clause 15) declares an object of the same type (except for cv-qualification) as the exception object (15.1), the copy/move operation can be omitted by treating the exception-declaration as an alias for the exception object if the meaning of the program will be unchanged except for the execution of constructors and destructors for the object declared by the exception-declaration.
 
  • #17
D H said:
Section 12.8, paragraph 31 of the standard.
When certain criteria are met, an implementation is allowed to omit the copy/move construction of a class object, even if the copy/move constructor and/or destructor for the object have side effects. In such cases, the implementation treats the source and target of the omitted copy/move operation as simply two different ways of referring to the same object, and the destruction of that object occurs at the later of the times when the two objects would have been destroyed without the optimization. This elision of copy/move operations, called copy elision, is permitted in the following circumstances (which may be combined to eliminate multiple copies)...
Given that most programmers don't even know that constructors are called in most of these copy/move situations, that's a safe optimization. Those of us who do know are alert enough to know that there is a potential optimization.
BTW: Although some of the compilers I've worked with might have that optimization, I've never knowingly run into it.
 
  • #18
.Scott said:
Although some of the compilers I've worked with might have that optimization, I've never knowingly run into it.
gcc and clang potentially do this optimization at optimization level zero. In other words, always. (I don't use the Microsoft or Intel compilers, so I don't know when RVO kicks in for those compilers.)

Consider the following:
Code:
return_type some_function () {
   return_type return_value;
   …
   if (some_condition) {
      return_value = something;
      return return_value;
   }
   …
   return_value = something_else;
   …
   return return_value;
}
Both gcc and clang will use RVO, even at optimization level zero, if:
- The very first statement in the function is a declaration of the form return_type return_value; and
- All of the return statements in the function return that variable.

This is a freebie optimization if you follow the single point of entry / single point of return paradigm. Simply declare the return value at the very first line within the function and return that variable at the very last line of the function.
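Conversely, here is a sketch of a pattern that typically defeats NRVO, because two different named objects can reach the return statements:
Code:
#include <string>

std::string pick(bool flag) {
    std::string a = "aaa";
    std::string b = "bbb";
    // Neither a nor b can be constructed directly into the return slot,
    // since either one may be returned; expect a move (or copy) on at
    // least one path.
    if (flag) return a;
    return b;
}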
 
  • #19
D H said:
Both gcc and clang will use RVO, even at optimization level zero, if:
- The very first statement in the function is a declaration of the form return_type return_value; and
- All of the return statements in the function return that variable.
The key is that it's a compile-time optimization. I don't know of any C++ development environments that provide compiler/linker optimizations other than dropping unused resources from the executable.
 

