C/C++ [C++] Never have more than 1 return statement?

  • Thread starter: Jamin2112
Summary
The discussion centers on the debate over whether a function in C++ should have only one return statement. Some argue that having multiple return points can enhance readability and simplify code, especially in cases involving preconditions or error handling. Others advocate for a single exit point to improve debugging and maintainability, citing traditional coding standards and practices. The conversation highlights the tension between modern coding practices and older rules, with some participants suggesting that the single return point rule is outdated. Ultimately, the effectiveness of return statements depends on the context and complexity of the function being written.
  • #31
Mark44 said:
That's not in dispute. What this thread was about was whether it was good practice to have exactly one return statement in a function. As D H has said, this "rule" has become something of a religious controversy, akin to the controversy about where the opening brace for a for loop, while loop, if block, etc. should go: on the same line or on a new line.

There is certainly some dispute over the former issue.

As for the latter, your editor or IDE can automatically reformat code when you open it.

I am much more interested in the GOTO issue that was mentioned earlier in the thread. It's actually standard practice in quite a lot of places.

I do not think it's religion. It can simply be a case of insufficient self-introspection. A programmer assumes that his own field of programming as practiced by his cohorts represents the sum total of all programming and doesn't realize other subgroups are working in very different environments with different idioms.
 
  • #32
I agree with most of the posts in this thread. IMO, it's completely pointless to force a single return when many would simplify the code. But I have a professor who enforces the single return rule on all of his assignments. I've mentioned to him that I think the rule is absurd (it's a small department and I know him well) and he responded by telling me that it messes up a lot of invariants necessary for a lot of optimizations. He did research in compilers at an upper-middle level school during his PhD, so he has the background to know this with certainty. I've never pressed him for what exactly it is that the multiple returns mess up, as it would likely be an involved discussion I wouldn't completely understand anyway (thus doing so would show inconsideration for the value of his time). I know this doesn't really mean much because I can't substantiate anything, but it might be worth considering. Does anyone here have any idea what he could mean?

PS Even knowing this, I still use multiple returns, mainly because I have a policy against coding for optimizations. I code algorithms. If compilers can't generate efficient code for an efficient algorithm, IMHO, that's the compiler writers' problem and not worth my effort unless it's causing me performance problems in a hot segment of code.
 
  • #33
TylerH said:
I agree with most of the posts in this thread. IMO, it's completely pointless to force a single return when many would simplify the code. But I have a professor who enforces the single return rule on all of his assignments. I've mentioned to him that I think the rule is absurd (it's a small department and I know him well) and he responded by telling me that it messes up a lot of invariants necessary for a lot of optimizations. He did research in compilers at an upper-middle level school during his PhD, so he has the background to know this with certainty. I've never pressed him for what exactly it is that the multiple returns mess up, as it would likely be an involved discussion I wouldn't completely understand anyway (thus doing so would show inconsideration for the value of his time). I know this doesn't really mean much because I can't substantiate anything, but it might be worth considering. Does anyone here have any idea what he could mean?

I seriously doubt that he's correct, that compilers don't handle multiple exit points well, but I think I can outline his argument, though it's difficult to explain stuff that is wrong.

Inside the body of a function that isn't to be inlined, the compiler assigns registers to variables. There's a finite number of registers available. If a register already contains a value that is needed after the function call, its contents need to be stored on the stack, typically at the start of the function, and the value needs to be loaded back into the register from the stack before exiting from the function. The stack is typically stored in the fastest memory available, but memory access is often significantly slower than register access.

There's extra complexity, in that different processors give different penalties for failed branch prediction, and in some circumstances, it's better to execute extra instructions and discard their result in order to remove branches and dodge a branching penalty.

The reason I think it's not relevant is that it's trivial for a compiler to unite exit points, in the same way that a programmer would. In fact, early exits allow compilers to use fewer registers in those instances. Typically, when writing performance critical code, I would tend towards multiple exit points, rather than away from them.

Compiler behaviour does vary, and early releases of compilers have unexpected behaviours. Typically, compiler optimisation involves analysing the most common use-cases, and multiple exit points are a relatively common pattern. Nevertheless, if you're writing performance critical code, you tend to disassemble it anyway and modify the C++ to coax the compiler into producing the correct code. If it's stubborn, then moving to assembly is the only solution.
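As a minimal sketch of what uniting exit points means (the functions and names here are invented for illustration, not taken from anyone's code), a decent optimiser will typically compile both of these forms to essentially the same machine code:

Code:
// Early-exit style: the error case returns immediately.
int scaled_or_error_early(int x) {
    if (x < 0) {
        return -1;        // early exit
    }
    return x * 2;         // main path
}

// Single-exit style: both cases are funnelled to one return.
int scaled_or_error_single(int x) {
    int result;
    if (x < 0) {
        result = -1;      // error case
    } else {
        result = x * 2;   // main path
    }
    return result;        // the single exit point
}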

TylerH said:
PS Even knowing this, I still use multiple returns, mainly because I have a policy against coding for optimizations. I code algorithms. If compilers can't generate efficient code for an efficient algorithm, IMHO, that's the compiler writers' problem and not worth my effort unless it's causing me performance problems in a hot segment of code.

Whether or not you should optimise for performance really depends upon the circumstances, and there are cases where it's very important that a section of code completes within a fixed time on fixed hardware. Hoping that a new version of a compiler is going to turn up before you need to meet that demand isn't a sensible solution. Alternatively, if it doesn't really matter to you how long it takes your code to complete, then you should prioritise other things such as maintainability and minimisation of bugs.

A good programmer develops a style that weighs all these considerations in a particular balance and constantly re-examines that style based upon how they currently perceive the desirable characteristics of their code.
 
Last edited:
  • #34
gmar said:
There is certainly some dispute over the former issue.

As for the latter, your editor or IDE can automatically reformat code when you open it.
That's problematic when you have to check your code into a revision control system and project dicta mandate that all checked-in code must be formatted per a strict set of rules. I've yet to meet an automated reformatting tool that properly converts all the nuances of style A to all the nuances of style B.

To me, the easiest way around this problem is to relax those project-wide dicta. This becomes especially important for big projects, those with many tens of thousands of lines of code or more. Should there be consistency within a file, or some logical grouping of files? Absolutely. Should there be consistency across a project that comprises a million lines of code? No. Now the project manager is imposing his or her own computing religion.

I am much more interested in the GOTO issue that was mentioned earlier in the thread. It's actually standard practice in quite a lot of places.
The only places where I've seen GOTO used are
  • In ancient code that dates from the 1970s or earlier.
  • In not so ancient code where the programmers saw 1970s era code as a "how to" example of best practices.
  • In some finite state automata, where the natural transition from one state to another is to go to that other state.
  • In organizations that have mandated the single point of entry / single point of return as an anti-pattern.
I have successfully shot down incorporating the single point of entry / single point of return rule into a project's coding standards by innocently asking "so does that mean we can use goto?"

To me, the easiest way around this problem (goto and return) is that code needs to be subject to peer review. It's impossible to specify in rules all of the ways that code can be "wrong". Someone used an O(n³) algorithm where an O(n log n) algorithm will do? That code is wrong, but try writing a programming standard that says not to do that. Humans remain the best bug detectors. That's why we review code. Stinky code ("code smell") is another area where standards just don't quite work. We all know stinky code when we get too close to the screen and smell it. Multiple return statements can be a sign of code smell. "Your code stinks" from a peer reviewer is always a relevant comment that needs to be addressed.
TylerH said:
I agree with most of the posts in this thread. IMO, it's completely pointless to force a single return when many would simplify the code. But I have a professor that enforces the single return rule on all of his assignments. I've mentioned to him that I think the rule is absurd (it's a small department and I know him well) and he responded by telling me that it messes up a lot of invariants necessary for a lot of optimizations. He did research in compilers at a upper-middle level school during his PhD, so he has the background to know this with certainty.
That is true to some extent. A return deep inside a multiply-nested loop is almost inevitably going to raise havoc with an optimizer. The same is true for goto and break statements. Those statements mess with the loop invariants, thereby precluding a lot of optimizations. On the other hand, an early return such as a check that the user input mass of an object is positive doesn't mess with those loop invariants. Advanced languages have a concept of preconditions and postconditions. Older languages such as C and C++ do not, and as a result you get design patterns such as early return.
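As a hedged sketch of that early-return precondition check (the kinetic_energy function and its NaN-on-failure convention are invented for this example, not part of anyone's codebase):

Code:
#include <cmath>

// Early return: reject a non-physical mass up front, then do the real work.
double kinetic_energy(double mass, double velocity) {
    if (mass <= 0.0) {
        return std::nan("");                    // precondition failed, bail out early
    }
    return 0.5 * mass * velocity * velocity;    // main computation
}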

A well-established design pattern in an older language is often a built-in feature, or perhaps even a no-op, in a more advanced language.
 
  • #35
D H said:
That's problematic when you have to check your code into a revision control system and project dicta mandate that all checked-in code must be formatted per a strict set of rules. I've yet to meet an automated reformatting tool that properly converts all the nuances of style A to all the nuances of style B.

That's weird since doing a custom extension for an existing extensible code formatter is probably a weekend's work for an intern.

My programming background is embedded systems in C. We do a lot of things the majority of non-realtime coders don't like, but in all honesty, the majority of idiomatic code arguments are based on soft factors, so comparing it to a religion is a pretty good analogy. At least until, for example, we get some really good formal verification tool which chokes on multiple exit points in a function.

Now would probably not be a good time for me to mention the exception vs return code "debate".

I didn't like the tone of my reply, so I edited it.
 
Last edited:
  • #36
D H said:
The only places where I've seen GOTO used are
  • In ancient code that dates from the 1970s or earlier.
  • In not so ancient code where the programmers saw 1970s era code as a "how to" example of best practices.
  • In some finite state automata, where the natural transition from one state to another is to go to that other state.
  • In organizations that have mandated the single point of entry / single point of return as an anti-pattern.
I have successfully shot down incorporating the single point of entry / single point of return rule into a project's coding standards by innocently asking "so does that mean we can use goto?"
I recall seeing goto being used in MS Windows internal OS code, possibly in kernel code, but it's been about 10 or 12 years, so I'm not sure of that. One rationale for its use, I believe, was that if something untoward happened, not as much unwinding of the call stack was needed.
 
  • #37
goto
... previous thread: https://www.physicsforums.com/showthread.php?t=185816

anti-pattern
I've read the phrase "arrow anti-pattern" more often, usually referring to overly nested if-else sequences spanning over 50 lines of code, where the primary code path of a function ends up nested 4 levels or more deep and the else handling code for the first if is at the very end of the nested mess. This is where alternatives like goto or a state-type variable are better.
 
  • #38
rcgldr said:
I've read the phrase "arrow anti-pattern" more often, usually referring to overly nested if-else sequences spanning over 50 lines of code, where the primary code path of a function ends up nested 4 levels or more deep and the else handling code for the first if is at the very end of the nested mess. This is where alternatives like goto or a state-type variable are better.
Or, as I said:
Code:
//
// Non-loop to provide easy breaks.
for(;;) {
  if(...) break;
  ...
  if(...) break;
  ...
  if(...) break;
  ...
  if(...) break;
  ...
  break;
}
 
  • #39
One community's pattern is another community's anti-pattern.

This is quite common in C.

Code:
function allocate_and_process ():

    a = allocate ()
    if test (a) == fail: 
        goto UNDO_A
        
    b = allocate ()
    if test (b) == fail:
        goto UNDO_B
        
    c = allocate ()
    if test (c) == fail:
         goto UNDO_C
         
    test_code = do_stuff (a, b, c)
    return test_code // returns test result code
     
    UNDO_C:
       free (c)
       
    UNDO_B:
        free (b)
        
    UNDO_A:
        free (a)
        
    return COULD_NOT_RUN_TEST // returns value showing test could not run

Although it's common in C, if you do it in C++ you (usually!) get slapped. Not only do you have RAII, but it's not exception safe.
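For contrast, here is a rough C++ sketch of the same shape using RAII (the Resource type and helper functions are placeholders invented for this example). Each resource releases itself when it goes out of scope, whether the function returns early, returns normally, or an exception propagates:

Code:
#include <memory>

struct Resource { };                                          // placeholder resource type
bool test(const Resource&) { return true; }                   // placeholder check
int  do_stuff(Resource&, Resource&, Resource&) { return 0; }  // placeholder work
constexpr int COULD_NOT_RUN_TEST = -1;

int allocate_and_process() {
    auto a = std::make_unique<Resource>();     // released automatically on every exit path
    if (!test(*a)) return COULD_NOT_RUN_TEST;

    auto b = std::make_unique<Resource>();
    if (!test(*b)) return COULD_NOT_RUN_TEST;

    auto c = std::make_unique<Resource>();
    if (!test(*c)) return COULD_NOT_RUN_TEST;

    return do_stuff(*a, *b, *c);               // a, b, c are cleaned up in reverse order
}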
 
Last edited:
  • #40
gmar said:
That's weird since doing a custom extension for an existing extensible code formatter is probably a weekend's work for an intern.
It might be a weekend's work for an intern to solve half of the problem. It's the other half of the problem, the nuances, that make this difficult. It's a bit like applying a translator that translates from English to Russian and then another that translates from Russian to English. The results, while amusing, are likely to be incorrect.

This is a big sidebar to this thread on never having more than one return statement, so if you want to continue this discussion it's best to start a new thread.

gmar said:
Now would probably not be a good time for me to mention the exception vs return code "debate".
Probably not. The single versus multiple return statement "debate" is enough programming religion for one thread.

On the other hand, the single point of return rule implicitly precludes throwing an exception. In fact, C++ projects that enforce single point of return also typically ban the use of exceptions. That in turn precludes using things such as C++ containers, and a number of realtime C++ projects do ban their use. Realtime programming is a different world.

gmar said:
One community's pattern is another community's anti-pattern.
Another way to look at it: That a design pattern exists in some language is a sign of a weakness in that language. The call stack is a design pattern in assembly; it's (mostly) invisible in high order languages. The goto exit_point widely used in C to address resource issues is (mostly) obviated for C++ programmers by RAII. Many C++ design patterns similarly become invisible or unnecessary in a dynamic functional programming language where classes and functions are first class objects.

This is quite common in C. <typical C function with multiple allocations and multiple gotos elided>
It is indeed very common, particularly in low level C code such as OS kernel code. It's a standard C design pattern to ensure that resources are properly handled. In fact, CERT recommends this style over multiple returns: https://www.securecoding.cert.org/confluence/display/seccode/MEM12-C.+Consider+using+a+goto+chain+when+leaving+a+function+on+error+when+using+and+releasing+resources .

Although it's common in C, if you do it in C++ you (usually!) get slapped. Not only do you have RAII, but it's not exception safe.
There's a lot of C++ code out there that isn't exception safe. Writing exception safe software is doable but is not easy. There's a lot more to it than try and catch. (In fact, one can write exception safe functions that don't use try and catch at all.)

Interestingly, C++11 has deprecated the very idea of listing the exceptions a function can throw. The new concept is the noexcept qualification, which is part of a function's signature. A function qualified as noexcept(false) (or no spec at all) can throw anything. A function qualified as noexcept or noexcept(true) guarantees that it will not throw, even indirectly.
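For instance (a minimal sketch, not drawn from the posts above):

Code:
#include <cstddef>
#include <vector>

// No exception specification at all: this may throw (std::bad_alloc, for instance).
std::vector<int> make_big_vector(std::size_t n) {
    return std::vector<int>(n);
}

// noexcept: promises never to throw; if it did, std::terminate() would be called.
void swap_vectors(std::vector<int>& a, std::vector<int>& b) noexcept {
    a.swap(b);   // std::vector::swap does not throw for the default allocator
}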
 
Last edited by a moderator:
  • #41
I figure it's about a 99% chance that I'll get flamed for what I'm about to say, but here goes anyway.

I've been writing code--and maintaining old code--for more than 3 decades. My field is computer science, not physics, though I've had to do my share of numerical code. The advice to have a single exit point (for any subroutine larger than a few lines) is sound, based on my experience. More than once, I've needed to install monitors in old code to troubleshoot it. This meant, in some cases, logging exit conditions. That was a lot easier to do when there was a single exit point.

You may think you'll never want to do anything like that, but I submit that you cannot know--if code lives for several years--what is likely to be needed in the future. Hence, the common advice is to use a single exit point in most cases. Note, I acknowledge that there are trivial cases where it would not matter.

You can blow me off as being an old fogie if you like, or cavalierly dismiss this as being "a religious war" in which there is no right answer. You can also kill yourself by drinking too much or smoking too much despite obvious and widespread evidence that those things will kill you.

Yes, a "go to" like structure is required to implement this in some cases. The payoff of doing it is, there is a single exit point WHERE YOU CAN WRITE CODE THAT EXECUTES IN ALL CASES on the way out. In any subroutine that is larger than will fit on the screen at one time (and hence, harder for a code reader to understand), the single-exit-point tactic will be helpful to anyone maintaining that code far in the future.

The most despicable old code not only has multiple exit points, but those exit points are not easily identifiable. That is, there is no "return" statement at the actual point of exit. It just happens that that's where the thread of execution sometimes ends up, thus falling through to the bottom and exiting anyway. I have learned to value readability of code much higher after years of taking care of code that I wrote when I was in a hurry. So even if you include a "break" statement that kicks you out of a multiple-choice selection, adding a comment saying "done, let's exit" or something can be helpful.

It amazes me that people put up such strong resistance to something that has proved so useful over time.
 
Last edited:
  • #42
harborsparrow said:
I figure it's about a 99% chance that I'll get flamed for what I'm about to say, but here goes anyway.
I'll try not to flame.

This is a question about C++. Different languages have different design patterns. The single point of entry / single point of return is a very old rule aimed at very old languages where responsibility for cleaning up resources is almost entirely the responsibility of the caller rather than the callee. The single point of entry part of the rule has pretty much been removed. Multiple entry points was one of the many failed experiments in computer languages.

What you described is a rather old school way to test code. A more modern view is that you test some function by writing a driver that calls the function in various ways. An even more modern view is that you write the test *before* you write the function. The test is the spec.

That's the ideal. The ideal and reality don't mix well. You never quite know what some programming construct is supposed to do until you write the construct, and then find you didn't think of all the angles. So you refine the test as you refine the code. The basic idea still holds, though: The test is the spec.

All of that ugly scaffolding and stubbing needed to test the artifact should be in the unit test code. The subject of the test is nice and clean. There's little need to add garbage to the "real" code. Where you do, well, in C++ you can do some nasty stuff amongst friends. If the unit test code is a friend of the class under test, that test driver can poke into private parts. This is subject to some debate. Do you need to test the private interfaces?
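A rough sketch of that "amongst friends" trick (the Accumulator class and its test are invented for this example): declaring the unit test a friend lets it inspect private state directly:

Code:
class Accumulator {
public:
    void add(int x) { total_ += x; }
private:
    int total_ = 0;
    friend struct AccumulatorTest;   // the unit test is allowed to poke at private parts
};

struct AccumulatorTest {
    static bool total_starts_at_zero() {
        Accumulator a;
        return a.total_ == 0;        // legal only because of the friend declaration
    }
};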
 
  • #43
I am not sure this is a C++ question. The OP did not mention what language the Google engineer he quoted was working with, and the OP himself chose to consider the statement only within C++. If the original source was referring to C++, you have to also consider that Google's C++ style guidelines disallow the use of exceptions, and as I have stated many times, exceptions are effectively additional function exit points.

http://google-styleguide.googlecode.com/svn/trunk/cppguide.xml

edit: Cannot type properly on this phone and voice input does not yet properly work for technical writing.
 
Last edited by a moderator:
  • #44
gmar said:
I am not sure this is a C++ question. The OP did not mention what language the Google engineer he quoted was working with and the OP himself chose to consider the statement only within C++.
While the OP did not specify the language posed by the Google engineer, he did pose the question in terms of C++. Different languages have different design patterns. The single entry / single return design pattern is aimed at older languages that don't have a concept of objects that clean up their mess (e.g., RAII in C++). In more modern languages you're much more likely to see a return (or throw) early design pattern.


If the original source was referring to C++, you have to also consider Google's C++ style guidelines disallow the use of exceptions.
The Google C++ style guidelines are widely perceived as a "how not to do it" example. They are rather old style and actively encourage some bad habits. Strictly speaking, the "no exceptions" rule pretty much rules out the C++ standard library. One legitimate place you will see the "no exceptions" rule is in embedded programming. For example, the Joint Strike Fighter coding standard has this rule (but only because of inadequate tool support; that standard was written about ten years ago).

A much better guideline for non-embedded programming is C++ Coding Standards: 101 Rules, Guidelines, and Best Practices by Herb Sutter and Andrei Alexandrescu. This book is also ten years old, but it has a much more modern point of view. Exceptions are encouraged rather than discouraged (but only for truly exceptional problems). Since every file has an EOF, hitting EOF should not be treated as an exception. Users always mess up when typing in data; goofy user input is not an exception.

Sutter and Alexandrescu's Rule #0 is "Don't sweat the small stuff." That includes things like specific indentation rules. Code should be indented, but exactly how is inevitably going to cause friction. They also take my stance on modifying other people's code. The modifier should adapt to the existing style.

Sutter and Alexandrescu explicitly call out Hungarian notation and single entry / single return as two old-style rules that should be avoided.

I have stated many times, exceptions are effectively additional function exit points.
That is exactly how they function from a McCabe complexity point of view. Multiple return statements do not function that way. Draw the graph of the function. Those multiple return statements all return to the statement after the call to the function. Exceptions "return" somewhere else up the call stack, even to the function that calls main() if the exception isn't handled. The same goes for calls to exit(), abort(), many calls to kill(), and calls to locally written functions that do not return (because they in turn throw or call exit, abort, or kill). Those are the alternate exits that McCabe warned about needing to be counted rather than multiple return statements.

Finding a tool that counts complexity correctly is not easy. Consider this code:
Code:
// Forward declarations so the example is self-contained.
void do_something();
void do_something_else();

void conditional_as_if (bool condition) {
   if (condition) {
      do_something();
   }
   else {
      do_something_else();
   }
}

void conditional_as_switch (bool condition) {
   switch (condition) {
   case true:
      do_something();
      break;
   default:
      do_something_else();
      break;
   }
}

void conditional_as_ternary (bool condition) {
   condition ? do_something() : do_something_else();
}
Many compilers will generate the exact same machine code for all three functions. There is no difference between them. All three functions have the same cyclomatic complexity, two. Many tools don't even get the if-else version right; they report the complexity as one. Straight line code has a complexity of zero with these tools. Some tools get the if-else version wrong and report a complexity of three because of the presence of the else clause. Many tools miss the ternary operator, seeing that as straight line code. It's not. Most tools count the switch implementation as more complex than the if-else implementation, even the ones that don't mistakenly bump the complexity for the else statement.

I've yet to see a tool that properly bumps the complexity for alternate exits but not for multiple returns. How do you do this for calls to locally-written functions that do not return? The only way I can see how to properly count these is to mandate commenting the line after such calls with "// NOT REACHED" and counting those comment lines.
 
  • #45
There's a lot of points to cover in this thread.

We know their (Google's) C++ style is possibly controversial, but the point is, firstly, if a Googler said "only use single points of return" then, assuming he meant C++, the environment in which he works has to be borne in mind, and secondly, when considering that environment, i.e. those guidelines, one has to consider that they are intended to work with the company's current codebase.

In my opinion, to go into more detail about JSF, the 'toolset driven' reason D H quotes for disallowing exceptions is almost certainly related to the unpredictability of the execution time. I don't think that's been directly solved, but the situation is improving. In an ideal world, both systems programming and embedded would have the same intolerance of code failure.

In terms of exceptions being more common in newer code: mostly, yes, but I see a movement away from them as concurrency takes off. My current toy is Rust, which does not have true exceptions. I could talk for ages on Rust but I won't. :-)

I don't do or understand cyclic complexity. To me it is another of those measurements that non-technical managers like to add in, like SLOC. Code review goes quite a long way towards addressing this issue. Perhaps when the formal verification guys can actually deliver, it may become more important for me if the tools only work on code with a complexity under a certain threshold.

I don't really get why people have trouble with source code formatters. In my mind the technical issue stems from whitespace, most commonly where you break a long line. This isn't problematic if handled consistently, which may involve reformatting non-compliant legacy code. (What is the point of having a standard if you accept code that violates it?) In my experience the other problems are human-related i.e. procedural. (Time to get new job, as D H would say!)
 
  • #46
The simple fact is that a lot of C++ programmers deal or have dealt with a lot of C code. A lot more deal with the C++ code that has been created or maintained by programmers dealing or having dealt with a lot of C code.

In C, "the single point of return" is almost invariably the only way to deal with the cleanup and debugging requirements rationally. That rationale, using the chain I described above, gets transferred to C++ coding, where it is certainly double and does not look completely wrong. However, as already stated, it is not necessarily the best way in C++ to address the underlying problem: cleanup and debugging. It may be OK, but it is certainly not by default.
 
  • #47
D H said:
I'll try not to flame.

What you described is a rather old school way to test code. A more modern view is that you test some function by writing a driver that calls the function in various ways. An even more modern view is that you write the test *before* you write the function. The test is the spec.

Granted the above to be true. However, a great deal of programming work is managing code that was written before the above ideal became common. And in languages lacking a test case harness. Much of the code I currently manage (for a team of biologists) fits this mold.
 
  • #48
gmar said:
In my opinion, to go into more detail about JSF, the 'toolset driven' reason D H quotes for disallowing exceptions is almost certainly related to the unpredictability of the execution time. I don't think that's been directly solved, but the situation is improving. In an ideal world, both systems programming and embedded would have the same intolerance of code failure.
The JSF C++ coding standard doesn't say why. It just says that "Rationale: Tool support is not adequate at this time."

I'll offer another opinion: Lockheed did not want to preclude using Green Hills Software as the development toolkit. (Lockheed did eventually choose Green Hills for the development platform.) Green Hills has historically taken a dim view of the "more advanced features" of C++. Some of these features, most notably exceptions, templates, virtual member functions, operator overloading, namespaces, multiple inheritance, and RTTI, they simply chose not to implement for a long time.

I don't do or understand cyclic complexity. To me it is another of those measurements that non-technical managers like to add in, like SLOC. Code review goes quite a long way towards addressing this issue.
It's a useful metric for indicating how many test cases need to be written, for indicating where the trouble spots are likely to lie, and for estimating costs of future projects with similar size and complexity to an existing one.

I don't really get why people have trouble with source code formatters.
Suppose you are charged with making a slight mod to a chunk of code. You look at the code and see that the programmer used white space to make the equal signs in a chunk of assignment statements line up vertically. So you temporarily set your IDE to auto-align on the equals sign in assignment statements. What you didn't see is that the code is chock full of sections like this:
Code:
  this_var    = this_value;
  that_var    = that_value;
  another_var = another_value;
  a_variable_of_a_different_color = yet_some_other_value;
  a_variable_related_to_the_above = yet_a_different_value;

Obviously the programmer thought of these as representing two sets of assignments. Your IDE doesn't. It will align all five of these statements on the equal signs. Should there be a blank line separating the two sets? That would certainly help an automated tool, but not everyone uses automated tools. I don't write code like this, but I certainly have seen it.

Another problem: You use four space indent and write switch statements like this:
Code:
switch (condition) {
    case First:
        do_the_first_thing();
        break;
    ...
    case Last:
        do_the_last_thing();
        break;
}

You are charged with maintaining some code that's indented like this:
Code:
switch (condition) {
case First:
  do_the_first_thing();
  break;
...
case Last:
  do_the_last_thing();
  break;
}
When you re-indent to suit your style, make your changes, and then convert back to the original style, you will be making changes to code that you did not change.

That's just two examples. There are many more. If the original coder used an IDE, it's best to use their settings when you are modifying their code. If they used vi or emacs, you had best not use an IDE at all.

As a reviewer, I'm going to use something like git blame to see what changes you have made, and whitespace changes are changes. The last thing I want to see as a reviewer is a bunch of changes that aren't really changes.
 
  • #49
D H said:
The JSF C++ coding standard doesn't say why. It just says that "Rationale: Tool support is not adequate at this time."

I'll offer another opinion: Lockheed did not want to preclude using Green Hills Software as the development toolkit. (Lockheed did eventually choose Green Hills for the development platform.) Green Hills has historically taken a dim look at the "more advanced features" of C++. Some of these, most notably exceptions, templates, virtual member functions, operator overloading, namespaces, multiple inheritance, and RTTI, they simply chose not to implement those features for a long time.

I don't see how those opinions differ. I just did some Google-fu.

JSF++ is for hard-real time and safety-critical applications (flight control software). If a computation takes too long someone may die. For that reason, we have to guarantee response times, and we can't - with the current level of tool support - do that for exceptions. --- source: http://www.stroustrup.com/bs_faq2.html

I think we are in agreement. I didn't know he was involved in the project until I noticed he wrote "we". I guess that commercially, C++ did "need" some kind of high-profile embedded project.

D H said:
...whitespace...

I am not at all convinced by this.

If non-trivial code isn't formatted according to some kind of formal standard, it is broken. We fix broken code.

Those assignments using horizontal position of the equals sign to divide them into sections: Yes, I've also seen code like that. It's plain wrong. You understandably believe it should have a blank line. I would also accept a blank line or a comment. Using automated tools to add or remove blank lines from the middle of functions is in my opinion also like reformatting comments: don't do it.

The leading whitespace in the switch: You don't have to have the same style for everything, so long as your tools can apply the correct template to the correct file, you can mix multiple nesting styles in the same project. If it *does* flag a change and cause a rebuild, it will happen only once, the first time the tools touch it, and, again, it means it was broken.
 
  • #50
gmar said:
In my opinion, to go into more detail about JSF the 'toolset driven' reason D H quotes that they cite for disallowing exceptions is almost certainly related to the unpredictability of the execution time.

Possibly. However, I think it had more to do with the fact that exceptions require support by the compiler, the runtime library, and very frequently by the OS or whatever substrate there is for execution. None of which may exist in some particularly dumb environment. Besides, whatever mechanism is used for exceptions may be incompatible or unavailable in certain modes (as in, for example, Windows kernel mode at a raised interrupt request level).
 
  • #51
gmar said:
I don't see how those opinions differ.
I do. Green Hills was and to some extent remains the dominant supplier for this type of hard real-time, safety-critical software. Allowing exceptions would have precluded Green Hills. That was not a decision to be made lightly.

The hard real-time, safety-critical military software community had just recently made the switch from Ada to C++. The Ada exception model was the primary inspiration for the C++ exception model. The dim view that Green Hills took toward Ada exceptions was also the primary inspiration for their not liking C++ exceptions. The dominant supplier of hard real-time, safety critical Ada development platforms prior to C++? It was Green Hills. It's no surprise that this real-time, safety-critical military software community banned the use of C++ exceptions. They had already banned their use in Ada!

Perception and FUD had a lot to do with this decision. The vendors who truly did support C++, as opposed to C with Classes, which is what the EC++ subset really should be called, were new kids on the block and not quite trusted. Green Hills had been working in this domain for a long time.


I guess that commerically, C++ did "need" some kind of high-profile embedded project.
The hard real-time safety-critical military software community needed a language that encouraged safe programming practices, and by 1997 it was becoming apparent that this language was not Ada. Ada programmers were just too few in number, and tool support was minimal because Ada remained a cottage industry language as opposed to a commodity language. That language wasn't C. Heaven forbid! Those with the mindset that favors bondage and discipline languages such as Ada balk at languages such as C. Ada was invented partly in response to the growth of languages such as C. C++ on the other hand has a lot of safe language features, once again inspired by Ada. C++ had two key things Ada lacked: a good amount of commercial support and a growing programmer base. C++, not C, was a natural fit.


If non-trivial code isn't formatted according some kind of formal standard, it is broken.
That's your opinion. How far do those standards go? Down to spacing before equals signs in an assignment statement? That to me is ludicrous. It's the IDE that's broken here, not the code.

My opinion of IDEs? I am not a fan. They're at what I call the six month old puppy stage, and they will remain at that stage until the hard AI problem is solved. The "six month old puppy stage" -- When you buy a brand new puppy, you *know* it's going to pee on the floor. It's guaranteed, and you'd better be prepared for that inevitability. You don't beat the puppy. You beat yourself for not being prepared. When she becomes an older dog? She'll hold her bladder until it bursts rather than pee on the floor if you go out for supper and she happens to drink too much water while you're gone (or even if you were a doofus and didn't let her out before you went). On the other hand, a six month old puppy will have learned that you absolutely do not approve of peed-on carpets, but sometimes she'll just forget.
 
  • #52
In the past 3.5 weeks, I have written 4116 lines of code. That count doesn't include blank lines, comment lines, or lines that have nothing more than an open or close squiggly brace.

Actually, in the process of writing those lines, I have probably written many hundreds more - and then deleted them again as I rewrote and rearranged the code to avoid having the same basic function repeated. That's one of my rules - avoid cut and paste code, avoid duplicate functions. They are the real maintenance issues.

While I am working the code - there is no room for anything other than what I want the code to look like. I don't choose an arbitrary style, I choose a style that will let me find things in seconds. This may or may not be a style that a reviewer or someone else finds appealing - but that's a different project.

If I am looking to make non-trivial changes to the code, I do not worry about how much a code reviewer is going to have to look at. In fact, by the time I am finished, there may be twice as many modules all with different file names from what was there to start with.

And, on the topic of "trivial changes", you have to be careful. Before making any change - you need to scope out its general effects. In most cases, the change you're making is not one that was anticipated, so you need to own the code and really understand what is relying on what.

If you put up standards that interfere with the coding process itself - you are going to lose.

I am unsympathetic to the issues of reviewing code that's been reformatted. I've done such reviews. The changes are not difficult to identify. Aside from ignoring white-space changes on a line (which MS Sourcesafe can do), you can also reformat both the before and after to the same pattern and compare them.

There are much more important things to look for in a code review than style.
 
  • #53
There's an underlying issue here that I think people are on different sides of and perhaps not realising.

This is, "Do you believe that your coding standard is part of your project's specification?"

If you do, it's easier to come to the decision to reject non-compliant code. Especially in the newer design methodologies where developers have input into the specification. For example Scrum, where feature requests ("user stories") can come from developers as well as the end-user.

If you don't, it's a lot harder. For the people in the don't camp, do you consider the choice of language to be part of the specification? Some projects require the use of language X. Why is this rule considered more concrete than specifying ways in which said language can be used?

Really, the discussion about specific requirements (spaces before equals signs) is a symptom of the greater issue, which is the one above.
 
Last edited:
  • #54
gmar said:
There's an underlying issue here that I think people are on different sides of and perhaps not realising.

This is, "Do you believe that your coding standard is part of your project's specification?"
You are making an implicit assumption here of a very rigid, very authoritarian, very complete, and machine testable coding standard. I've seen such standards, and I never liked a single one of them.

Your opinion and mine are at odds. You apparently want the widely varying opinions on how best to write code to be codified as rules that dictate which opinion is the one and only right opinion. I don't see it that way, personally. When I'm in charge (sometimes I'm a peon, sometimes I'm in charge), I make sure the coding standards are not rigid. That this means the standards aren't fully testable / fully automatable doesn't bother me.

I'm a big fan of Sutter's and Alexandrescu's rule #0: Don't sweat the small stuff. Standards that say exactly how to indent, exactly how to use whitespace, that one should use if (0 == variable) rather than if (variable == 0) -- that to me is sweating the small stuff.
 
  • #55
If you violate this rule you must be careful that you are not skipping things that you routinely put at the end of a subroutine (incrementing counters, changing pointers, freeing memory, etc.). I violate this rule to skip cases that do not apply in the function, but I occasionally pay the price. In safety critical code, you have to take a hard look at any violations.
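A minimal sketch of that hazard (the counter and function are invented for illustration): the early return silently skips the bookkeeping that sits at the end of the routine:

Code:
static long calls_completed = 0;   // hypothetical counter that should track every call

int handle_request(int request) {
    if (request == 0) {
        return -1;                 // early exit: note that calls_completed is NOT updated
    }
    int result = request * 2;      // normal processing
    ++calls_completed;             // routine end-of-subroutine bookkeeping
    return result;
}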
 
  • #56
What rule? Yes, the single point of entry / single point of return is a common rule in C, particularly in low level systems programming. It is not a widely accepted rule in C++. If anything, it is oftentimes viewed as an anti-pattern in C++. As gmar wrote in post 39, "One community's pattern is another community's anti-pattern."

There is an overarching concept here that applies to all languages, related to three key questions:
- Does the function clean up after itself?
- What happens if things go wrong?
- What guarantees does the function make?

Done right, the last question encapsulates the answer to the other two. There are a number of different guarantees a function can provide.
  • Good question! Anything might happen if you use this product.
    This is the non-guarantee that most software products demand that the user of the product consent to.
  • I guarantee that bad things will happen if you use the product incorrectly.
    For example, calling sqrt(-1) can make your program go belly up, depending on how you've set the floating point error handling. But since the man page for sqrt documents this behavior, that's OK! It's a feature, not a bug.
  • I guarantee that nothing can possibly go wrong.
    For example, any reasonable implementation of the sine function will be able to make this guarantee. (Note: An implementation that uses Taylor expansion is not "reasonable".)
  • I guarantee that the function cleans up after its mess (but it might leave a side-effect or two).
    For example, if you call the C scanf function to parse six integers from a stream but the stream only contains three integers and an EOF, the function's return value will tell you that something went wrong (it reports how many items it successfully assigned). However, those first three integers will typically be parsed and stored. The function obeys the basic guarantee in that no matter what happens, any resources allocated internally are released prior to returning.

    This basic guarantee, "I clean up my mess even if things go wrong", is not easy to achieve. There's a lot more to resources than allocated memory. File handles, internet connections, shared memory, semaphores, mutexes: All of these are resources, many of which encapsulate multiple low-level resources that are invisible to all but system programmers.
  • I guarantee that the function cleans up after its mess and has no side effects unless everything works.
    This is the guarantee that database programmers expect to see. A commit either commits everything that was requested or it leaves the database untouched if something goes wrong. This guarantee is extremely hard and oftentimes very expensive to achieve. Even most kernel code does not make this strong guarantee.
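As a minimal sketch of the difference between those last two guarantees (the append functions here are invented for illustration): the first may leave the destination partially updated if an insertion throws; the second builds the result off to the side and commits it with a non-throwing swap, so the destination is either fully updated or untouched:

Code:
#include <vector>

// Basic guarantee: nothing leaks, but dest may hold only part of src if push_back throws.
void append_basic(std::vector<int>& dest, const std::vector<int>& src) {
    for (int v : src) {
        dest.push_back(v);
    }
}

// Strong guarantee: do all the throwing work on a copy, then commit with a non-throwing swap.
void append_strong(std::vector<int>& dest, const std::vector<int>& src) {
    std::vector<int> tmp(dest);
    for (int v : src) {
        tmp.push_back(v);
    }
    dest.swap(tmp);   // all-or-nothing commit
}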
 
Last edited:
