
Using a length() function in a loop's condition

  1. Mar 17, 2014 #1
    I see on StackOverflow people doing stuff like


    for (int k = 0; k < something.length(); k++)


    where the body of the for loop never has any chance of modifying something's length. I thought that was considered bad coding practice. The implementation of something's class might not return a private variable; instead it might calculate the length on each call. Even if it did return a private variable, that's a couple of nanoseconds of overhead per iteration. I guess it's considered OK coding practice to do stuff like that? I mean, it does look better than

    for (int k = 0, len = something.length(); k < len; k++)

    I've also seen people on StackOverflow say "Be careful of micro optimization." Can someone explain what they mean by this?
     
  3. Mar 17, 2014 #2

    Mark44

    Staff: Mentor

    I would lean toward writing the loop like this:
    Code (Text):

    int len = something.length();
    for (int k = 0; k < len; k++)
    {
       ...
    }
     
    My reasoning is that having the call something.length() in the loop's test expression requires a call to the length() function on each iteration. From that perspective, your second example is nearly identical to what I propose. The initialization expressions in your second example (k = 0 and len = something.length()) are evaluated only once, before the first iteration of the loop.

    As far as "be careful of micro optimization" goes -- compilers are generally pretty smart these days. A compiler might recognize that the value returned by length() isn't going to change, and so might substitute a constant for this length in the code that it generates. In that case, the effort I put into optimization might not produce any discernible results.
     
  4. Mar 17, 2014 #3

    AlephZero

    User Avatar
    Science Advisor
    Homework Helper

    What they mean is that in 99.999% of your code, worrying about things like this is of no practical significance. Making the code run 0.001% faster but increasing the chance of a bug by 0.01% is a bad deal. The more times you do that, the more likely you are to lose.

    Next week (or next year) you or somebody else will change the code so the length of "something" isn't constant any more. They might not even know that the code they are working on is called from inside this particular loop. Goodbye optimization, hello bug....
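A minimal sketch of that failure mode (the functions and values here are invented for illustration): once the loop body can change the length, a cached bound and a live bound give different answers.

```cpp
#include <cassert>
#include <cstddef>
#include <vector>

// Hypothetical example: the body grows the container, so a length cached
// before the loop goes stale.
int visits_with_cached_len(std::vector<int> v) {
    int visited = 0;
    std::size_t len = v.size();                    // cached once, up front
    for (std::size_t k = 0; k < len; ++k) {
        if (v[k] % 2 == 0) v.push_back(v[k] / 2); // body changes the length
        ++visited;
    }
    return visited;  // newly appended elements are never visited
}

int visits_with_live_len(std::vector<int> v) {
    int visited = 0;
    for (std::size_t k = 0; k < v.size(); ++k) {  // bound re-checked each time
        if (v[k] % 2 == 0) v.push_back(v[k] / 2);
        ++visited;
    }
    return visited;  // appended elements are visited too
}
```

For the same input the two loops visit a different number of elements, which is exactly the kind of silent behavior change the hoisted version risks.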
     
  5. Mar 17, 2014 #4

    Borek

    User Avatar

    Staff: Mentor

    Code (Text):
    for (int k = something.length() - 1; k >= 0; k--)
     
  6. Mar 17, 2014 #5
    In college, though, if a homework grader was in a bad mood or didn't like a student, they might decide to mark the student down for writing it that way.
     
  7. Mar 17, 2014 #6

    D H

    User Avatar
    Staff Emeritus
    Science Advisor

    In general, yes. If you are programming in C++, it's standard to use an optimizing compiler to compile your code. Modern optimizers can be *very* smart about pulling computations out of the loop.

    Premature optimization is the root of all evil (or at least most of it) in programming. Donald Knuth in his 1974 Turing Award lecture, Computer Programming as an Art.

    Suppose you have a program that meets its timing requirements by a wide margin. Do you care that some little bit of code could have been written more efficiently? You shouldn't.

    Suppose you have a program that doesn't meet its timing requirements. Are you going to fix every little thing that might hint at poor performance? You shouldn't. You should instead profile the program and see why it isn't working as required. Most likely, the problem is because
    • Some fool of a manager who remembers hearing that an optimizing compiler caused a program to crash <something> back in 1969 has banned the use of optimizing compilers.

      The proper response: Get another job.

    • Your program is spending too much time I/O suspended. Tuning your CPU-bound computations doesn't make a bit of sense if this is the problem.

      The proper response: Fix the I/O problem.

    • Some fool of a programmer used an O(N^3) algorithm in a tight loop when there is an O(N log N) algorithm that accomplishes the same task. Once again, tuning your CPU-bound computations won't help.

      The proper response: Fix that algorithm.

    • A tight loop comprising ten lines of code out of a 100 KSLOC program is found to be responsible for 90% or more of the CPU usage. Fine tuning the 99.99% of the code that isn't a part of this tight loop won't help solve this problem. Fine tuning the ten lines of code that are responsible for draining the CPU will.

      The proper response: Tune those ten lines. That those ten lines of code change from a nicely readable chunk of code into nightmarish unmaintainable gobbledygook is OK in this case because profiling has demonstrably shown that this code is problematic.
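The "fix that algorithm" bullet can be sketched concretely. Here are two ways to answer "does this data contain a duplicate?" (function names invented for illustration): the nested-loop version is O(N^2) (and careless variants reach O(N^3)), while sorting first gets it down to O(N log N), dwarfing anything a micro-optimization of the inner loop could buy.

```cpp
#include <algorithm>
#include <cassert>
#include <cstddef>
#include <vector>

// O(N^2): compare every pair.
bool has_duplicate_nested(std::vector<int> v) {
    for (std::size_t i = 0; i < v.size(); ++i)
        for (std::size_t j = i + 1; j < v.size(); ++j)
            if (v[i] == v[j]) return true;
    return false;
}

// O(N log N): sort, then duplicates are adjacent.
bool has_duplicate_sorted(std::vector<int> v) {
    std::sort(v.begin(), v.end());
    return std::adjacent_find(v.begin(), v.end()) != v.end();
}
```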


    You're out of college now, so why do you care what someone dinged you for in the past?
     
  8. Mar 21, 2014 #7
    It's still good to notice these things, I think. This one is quite trivial, but they aren't all trivial, and the more you look for them the more you spot them.
     
  9. Mar 22, 2014 #8
    I would favor:
    Code (Text):
    for (int k = 0, len = something.length(); k < len; k++)
    or the equivalent
    Code (Text):
    int len = something.length();
    for (int k = 0; k < len; k++)
    Here are my points:
    1) I certainly wouldn't get bogged down with optimization, but I wouldn't code without regard for it either.
    In the situation you've described, where "length" isn't simply returning a private integer member, it is often computed in a loop itself. So why use an O(N^2) approach when N is so readily available?
    2) By retrieving the value once, you are making it clear that it does not change.
    3) Because the compile and link phases are usually not integrated for optimization, you should not expect this to be optimized for you. If the "length" function has been declared "const", then there are conditions where the compiler will be able to deduce that it is not changing. But it is easier to take the len assignment out of the loop than it is to determine whether the compiler will be able to do it - and will continue to be able to do it in future revisions.
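Point 1 can be made visible with an instrumented sketch (the type and its call counter are invented for illustration): a length() that walks the data on every call, like strlen, turns the naive loop into O(N^2), while hoisting the call into the initializer leaves it O(N).

```cpp
#include <cassert>

// Hypothetical container whose length() is computed on each call.
struct CountedString {
    explicit CountedString(const char* d) : data(d) {}
    const char* data;
    mutable int length_calls = 0;      // counts how often length() runs
    int length() const {
        ++length_calls;
        int n = 0;
        while (data[n] != '\0') ++n;   // O(N) walk on every call
        return n;
    }
};

// length() in the test expression: called N+1 times, O(N^2) overall.
int count_naive(const CountedString& s) {
    int visited = 0;
    for (int k = 0; k < s.length(); ++k) ++visited;
    return visited;
}

// length() hoisted into the initializer: called exactly once.
int count_hoisted(const CountedString& s) {
    int visited = 0;
    for (int k = 0, len = s.length(); k < len; ++k) ++visited;
    return visited;
}
```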

    And if the body of the for loop doesn't use "k", you can go with Borek's advice.

    However, when length is more complicated than returning a member variable, there is usually a way of detecting when you've reached the end without using the length function. For example:
    Code (Text):
    for (pObj=pFirst;pObj;pObj=pObj->Next) ...
    or
    Code (Text):
    for (k = 0; !(obj = array[k]).bEndData; k++) ...
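A self-contained sketch of the first, pointer-chasing style (names are illustrative): the null next pointer is the end marker, so no length() is needed at all.

```cpp
#include <cassert>

// Minimal singly linked list node for the sketch.
struct Node {
    int value;
    Node* next;
};

// Walk until the null sentinel; the loop condition never asks for a length.
int sum_list(const Node* first) {
    int total = 0;
    for (const Node* p = first; p != nullptr; p = p->next)
        total += p->value;
    return total;
}
```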
     
  10. Mar 27, 2014 #9

    harborsparrow

    User Avatar
    Gold Member

    Time spent doing the kind of optimization suggested in the second post of this thread is no longer necessary, as all the compilers I know of are now smart enough to make the length function call only once. IMO, code should usually be written for best human readability, as long-term maintenance will end up being a very high cost if human readability is sacrificed.

    In situations where performance is paramount, the overall design should look for optimizations, rather than at code-level. And if code itself must be "optimized", conventional wisdom is to make the code work and highly readable first, and then optimize working code (incrementally) at the end. This is because trying to optimize code for performance from the get-go tends to result in obscurity.
     
  11. Mar 27, 2014 #10
    If you write a length function and don't declare it "const", the compiler will be unable to optimize it to one call.
     
  12. Mar 27, 2014 #11

    D H

    User Avatar
    Staff Emeritus
    Science Advisor

    That's not necessarily true. Compilers are very good at spotting invariants, particularly if there's no aliasing. It's aliasing rather than non-constness that is the bugaboo of compiler theory. The compiler can typically tell whether some non-const variable isn't being modified. What kills things is when you have two pointers to the same chunk of memory.

    For example, memcpy(ptr+1,ptr,N) is a nifty way to propagate a value throughout an array on some computers. On other computers, who knows what will happen. The specification for memcpy explicitly allows implementers to assume that the chunks of memory pointed to by the source and destination arguments aren't aliased (i.e., that they don't overlap).

    The potential for aliasing is what makes C and C++ compilers generate slow code compared to Fortran. In C, you can qualify pointer arguments with the restrict keyword and voila! your code is Fortran-like in speed. The restrict keyword hasn't made it to C++ yet, at least not officially. Several C++ compilers do provide an equivalent to the C restrict keyword.
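The memcpy point above can be illustrated safely with memcpy's overlap-tolerant sibling (the function name here is invented): memcpy with overlapping source and destination is undefined behavior precisely because the spec lets implementers assume no aliasing, whereas memmove is defined for overlap.

```cpp
#include <cassert>
#include <cstring>
#include <string>

// Shift a string's contents right by one position in place.
// The source and destination ranges overlap, so memmove (not memcpy)
// must be used: memcpy here would be undefined behavior.
std::string shift_right_one(std::string s) {
    if (s.size() > 1)
        std::memmove(&s[1], &s[0], s.size() - 1);  // overlapping, but defined
    return s;
}
```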
     
    Last edited: Mar 27, 2014
  13. Mar 28, 2014 #12
    At compile time, you might have a top.cpp, top.h, and subs.h. The subs.h will declare a class and a member function "int CSubs::length()", but will not define it. Based on that, the compiler cannot optimize - because it does not know what "int CSubs::length()" does. In fact, at that point subs.cpp may not even be written. For example, if the definition of "int CSubs::length()" has debug statements in it that report each time it is called, then optimizing top.cpp by reducing those calls will result in wrong (but revealing) debug message output.
     
  14. Mar 28, 2014 #13

    D H

    User Avatar
    Staff Emeritus
    Science Advisor

    First off, there's a simple solution to this problem, the same as for the "Doctor, it hurts when I do this" problem: don't do that! It's a good idea to define CSubs::length() inline in the header if the function is small and has negligible fanout.

    Secondly, modern compiler/linkers can optimize away that call to CSubs::length() at link time. The Microsoft, Intel, GNU, and LLVM suites all support link-time optimization. Note that I wrote "compiler/linker." Compiler theory has revisited the concept that the sole job of the linker is to resolve external references. This leaves out key optimizations such as inlining the hidden implementation of CSubs::length(). Modern compiler/linkers can take advantage of the fact that the implementation is known at link time and rewrite the object code produced at compile time.


    You have to be careful here. The C++ standard explicitly allows compiler writers to optimize away some function calls even if they have side effects. In particular, if you put debug messages in your constructors and destructors you might well see unexpected results. It is your expectations that are buggy, not the compiler. Many C++ programs would run considerably slower were it not for return value optimization and copy elision.
     
  15. Mar 28, 2014 #14

    AlephZero

    User Avatar
    Science Advisor
    Homework Helper

    The basic idea isn't particularly modern, though the automatic implementation of it is. Back in the days of the early Cray supercomputers (c. 1980) the machine architecture was very inefficient at handling function calls, and compiler directives were often used to tell the compiler to expand functions inline at the source code level (renaming local variables to avoid name clashes) and then globally optimize and vectorize the resulting single large function. That sometimes made debugging "fun", when dealing with a single expanded function containing maybe 10,000 lines of code!
     
  16. Mar 28, 2014 #15
    My point was that you cannot presume it is optimized. My suggestion for "not doing that" was (and is) to declare length() as const. That is a more general solution - one that will work even if length() is in a dynamically linked library.
    I happen to have Visual Studio 2008 open in front of me now.
    Compiling this:
    Code (Text):
    {
      CString s = "123123123";
      int nn = 0;
      for(int n=0;n<s.GetLength();n++) nn++;
      s.Format("%2.2d",nn);
    }
    Ideally, the compiler could have optimized out this entire block of code - since it has no effect on the world. However, although GetLength() is declared const, Format(...) is not. So the only optimization other than register use was to move GetLength() out of the loop.
    Clearly compiler/build theory allows for greater automatic optimization, but as a matter of practicality, much of the optimization is still up to the programmer.

    In what standard (and in what compiler/linker implementations) does it allow constructor/destructor optimization to omit side effects? And how would a compiler know which side effects are discardable and which are not? If I put a debug statement into a function, ctor or not, I expect any variables that I am reporting to become undiscardable - so optimizations that were possible before would become inapplicable.

    BTW: Here is (some of) the code that resulted from the optimized compile. The lines that actually constructed the string and retrieved the length were moved toward the top of a function I'm working on - so they don't appear here:
    Code (Text):

      //
      // Temporary experiment.
      {
        CString s = "123123123";
        int nn = 0;

        for(int n=0;n<s.GetLength();n++) nn++;
    00393254  add         ecx,edi
    00393256  add         edx,edi
    00393258  cmp         ecx,eax
    0039325A  jl          00393254
        s.Format("%2.2d",nn);
    0039325C  push        edx  
    0039325D  lea         ecx,[esp+20h]
    00393261  push        offset string "%2.2d" (395A9Ch)
    00393266  push        ecx  
    00393267  call        dword ptr [__imp_ATL::CStringT<char,StrTraitMFC_DLL<char,ATL::ChTraitsCRT<char> > >::Format (395248h)]
    0039326D  add         esp,0Ch
      }
     
     
    Last edited: Mar 28, 2014
  17. Mar 28, 2014 #16

    D H

    User Avatar
    Staff Emeritus
    Science Advisor

    Section 12.8, paragraph 31 of the standard.
    When certain criteria are met, an implementation is allowed to omit the copy/move construction of a class object, even if the copy/move constructor and/or destructor for the object have side effects. In such cases, the implementation treats the source and target of the omitted copy/move operation as simply two different ways of referring to the same object, and the destruction of that object occurs at the later of the times when the two objects would have been destroyed without the optimization. This elision of copy/move operations, called copy elision, is permitted in the following circumstances (which may be combined to eliminate multiple copies):
    • in a return statement in a function with a class return type, when the expression is the name of a non-volatile automatic object (other than a function or catch-clause parameter) with the same cv-unqualified type as the function return type, the copy/move operation can be omitted by constructing the automatic object directly into the function’s return value

    • in a throw-expression, when the operand is the name of a non-volatile automatic object (other than a function or catch-clause parameter) whose scope does not extend beyond the end of the innermost enclosing try-block (if there is one), the copy/move operation from the operand to the exception object (15.1) can be omitted by constructing the automatic object directly into the exception object

    • when a temporary class object that has not been bound to a reference (12.2) would be copied/moved to a class object with the same cv-unqualified type, the copy/move operation can be omitted by constructing the temporary object directly into the target of the omitted copy/move

    • when the exception-declaration of an exception handler (Clause 15) declares an object of the same type (except for cv-qualification) as the exception object (15.1), the copy/move operation can be omitted by treating the exception-declaration as an alias for the exception object if the meaning of the program will be unchanged except for the execution of constructors and destructors for the object declared by the exception-declaration.
     
  18. Mar 28, 2014 #17
    Given that most programmers don't even know that constructors are called in most of these copy/move situations, that's a safe optimization. Those of us who do know are alert enough to know that there is a potential optimization.
    BTW: Although some of the compilers I've worked with might have that optimization, I've never knowingly run into it.
     
  19. Mar 28, 2014 #18

    D H

    User Avatar
    Staff Emeritus
    Science Advisor

    gcc and clang potentially do this optimization at optimization level zero. In other words, always. (I don't use the Microsoft or Intel compilers, so I don't know when RVO kicks in for those compilers.)

    Consider the following:
    Code (Text):

    return_type some_function () {
       return_type return_value;
       …
       if (some_condition) {
          return_value = something;
          return return_value;
       }
       …
       return_value = something_else;
       …
       return return_value;
    }
    Both gcc and clang will use RVO, even at optimization level zero, if:
    - The very first statement in the function is a declaration of the form return_type return_value; and
    - All of the return statements in the function return that variable.

    This is a freebie optimization if you follow the single point of entry / single point of return paradigm. Simply declare the return value at the very first line within the function and return that variable at the very last line of the function.
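An instrumented sketch of that pattern (the type and counter are invented for illustration): one named return object, declared first and returned on every path. The copy counter only observes whether the compiler elided the copy; the standard permits elision even though the copy constructor has a side effect, so the count is implementation-dependent and is not asserted exactly here.

```cpp
#include <cassert>

// Type whose copy constructor has a visible side effect (a counter),
// so elided copies can be observed.
struct Tracked {
    int value = 0;
    static int copies;
    Tracked() = default;
    Tracked(const Tracked& other) : value(other.value) { ++copies; }
};
int Tracked::copies = 0;

// NRVO-friendly shape: the return object is declared first, and every
// return statement names that same object.
Tracked make_tracked(bool flag) {
    Tracked result;
    if (flag) {
        result.value = 1;
        return result;
    }
    result.value = 2;
    return result;
}
```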
     
    Last edited: Mar 28, 2014
  20. Mar 28, 2014 #19
    The key is that it's a compile-time optimization. I don't know of any C++ development environments that provide combined compiler/linker optimizations other than dropping unused resources from the executable.
     