Why are the C++ programming standards so inconsistent?

In summary, NASA requires all flight software projects and safety-critical software projects to have a programming standard and use static analysis tools to verify adherence to that standard. This includes reviewing the results of the analysis during peer reviews/inspections of code items.
  • #1
I like Serena
D H said:
In every implementation I have come across, the key difference between #include "file_spec" and #include <file_spec> is that the former searches the directory containing the current file being processed while the latter does not. The Gnu compilers provide the -iquote option to define a quote search path that is distinct from the bracket search path, but I've never seen anyone use that option. Gnu also provides the now deprecated -I- option to split the search path into quoted and bracketed parts. I've never seen anyone use that, either. (I have however seen programming standards that forbid the use of this option.)

It's a pity that the various coding standards that I have seen are inconsistent as to when to use #include "file_spec" or #include <file_spec>.

The "High Integrity C++ Coding Standard" says:


High Integrity CPP Rule 14.9: Use <> brackets for system and standard library headers. Use "" quotes for all other headers. (QACPP 1011, 1012)

Justification: It is important to distinguish the two forms of #include directive not only for documentation purposes but also for portability. Different compilers may have different search methods and may not find the correct header file if the wrong form of #include is used.

Reference: Industrial Strength C++ 15.4;
 
  • #2


I like Serena said:
The "High Integrity C++ Coding Standard" says:
One word: Yech.

First off, that standard explicitly contradicts itself. Right up front it says "Some of the rules in this standard are mutually exclusive, hence only a subset of rules should be selected from this standard." The best thing to do with a standard that contradicts itself is to toss it. I could go on and on about how truly bad this standard is, but that would drag this thread off topic. Instead I'll just say one word: Yech. Oh yeah, and one more word: Next.
 
  • #3


D H said:
One word: Yech.

First off, that standard explicitly contradicts itself. Right up front it says "Some of the rules in this standard are mutually exclusive, hence only a subset of rules should be selected from this standard." The best thing to do with a standard that contradicts itself is to toss it. I could go on and on about how truly bad this standard is, but that would drag this thread off topic. Instead I'll just say one word: Yech. Oh yeah, and one more word: Next.

I take it that you do not write code for "Industrial Strength" or "Safety-critical" systems?
 
  • #4


I like Serena said:
I take it that you do not write code for "Industrial Strength" or "Safety-critical" systems?
Discounting flight software for human spaceflight systems or software to verify and validate such systems, the answer is "No, I don't."

Discounting that I also help write standards for such software, the answer is once again "No, I don't."

The standard you reference is one that I looked at and rejected.
 
  • #5


D H said:
Discounting flight software for human spaceflight systems or software to verify and validate such systems, the answer is "No, I don't."

Discounting that I also help write standards for such software, the answer is once again "No, I don't."

The standard you reference is one that I looked at and rejected.

So do you have a reference you'd like to quote from?
I'm assuming that the projects in which you write flight software require you to follow a coding standard (depending on the required safety level)?
 
  • #6


I like Serena said:
So do you have a reference you'd like to quote from?
Embarrassingly, yes. My personal laptop, my personal desktop, my work laptop, and my work desktop machines all have a toplevel folder entitled "Extremely boring NASA standards". Contents include things like NPR 7150.2A, "NASA Software Engineering Requirements", NPR 7123.1A, NASA Systems Engineering Processes and Requirements, http://www.hq.nasa.gov/office/codeq/doctree/87398.htm , and so on and so on. Oh for the days of being a peon ...
I'm assuming that the projects in which you write flight software require you to follow a coding standard (depending on the required safety level)?
NASA requires all flight software projects and all safety-critical software projects to have a programming standard of some sort. If some project wants to mandate Hungarian notation, GNU indentation, or single point of entry / single point of return, fine. What they cannot do is levy such nonsense on other projects. Some projects make new, delete, throw, try, and catch verboten keywords, and that's fine too. Such rules might well make sense for some embedded processor, but they don't make sense agency-wide. NASA has been smart enough to avoid specifying agency-wide programming standards.
 
Last edited by a moderator:
  • #7


D H said:
Embarrassingly, yes. My personal laptop, my personal desktop, my work laptop, and my work desktop machines all have a toplevel folder entitled "Extremely boring NASA standards". Contents include things like NPR 7150.2A, "NASA Software Engineering Requirements", NPR 7123.1A, NASA Systems Engineering Processes and Requirements, http://www.hq.nasa.gov/office/codeq/doctree/87398.htm , and so on and so on. Oh for the days of being a peon ...

NASA requires all flight software projects and all safety-critical software projects to have a programming standard of some sort. If some project wants to mandate Hungarian notation, GNU indentation, or single point of entry / single point of return, fine. What they cannot do is levy such nonsense on other projects. Some projects make new, delete, throw, try, and catch verboten keywords, and that's fine too. Such rules might well make sense for some embedded processor, but they don't make sense agency-wide. NASA has been smart enough to avoid specifying agency-wide programming standards.

In your first volume it states:

3.3.2 The project shall ensure that software coding methods, standards, and/or criteria are adhered to and verified. [SWE-061]

3.3.3 The project shall ensure that results from static analysis tool(s) are used in verifying and validating software code. [SWE-135]

Note: Modern static code analysis tools can identify a variety of issues and problems, including but not limited to dead code, non-compliances with coding standards, security vulnerabilities, race conditions, memory leaks, and redundant code. Typically, static analysis tools are used to help verify adherence with coding methods, standards, and/or criteria. While false positives are an acknowledged shortcoming of static analysis tools, users can calibrate, tune, and filter results to make effective use of these tools. Software peer reviews/inspections of code items can include reviewing the results from static code analysis tools.



And in the Requirements Compliance Matrix, taken from the 3rd volume, we have a long list of requirements about the quality of the software, among which:

7.1.1.7. Software quality metrics are in place and are used to ensure the quality and safety of the software products.

Typically these metrics would measure how well the coding standard has been observed, the details of which would be specified elsewhere.


Tentatively, I conclude that your work has to be verified to adhere to coding standards.
The coding standard itself is not specified here, but as part of the project, there has to be a reference to it (there has to be a Quality Assurance manual of some sort that specifies this). Different projects can use different standards.
Or is there a disclaimer of some type, rationalising why this would not be necessary?

Actually, I'm wondering if HICPP is used within NASA, along with e.g. the QACPP tool to measure and verify adherence.
 
Last edited by a moderator:
  • #8


I like Serena said:
Tentatively, I conclude that your work has to be verified to adhere to coding standards.
Complying with programming standards is a tiny part, a very tiny part, of what a flight software or safety critical project has to do. No matter how strict such standards are, some people somehow manage to write incredibly bad but perfectly compliant code, and no matter how loose, there will often be perfectly valid, well-written code that nonetheless violates the standards. Coding standards are, in my mind, a necessary evil. That they have to exist at all is a sign of a less than optimal language. Bad standards such as the HICPP are not a necessary evil. They are evil incarnate.

Actually, I'm wondering if HICPP is used within NASA, along with e.g. the QACPP tool to measure and verify adherence.
Not when I can help it. One of the advantages of no longer being a peon is that I can help prevent such atrocities.
 
  • #9


D H said:
Bad standards such as the HICPP are not a necessary evil. They are evil incarnate.

You insist that HICPP is a bad standard.
Can you give a couple of examples?
That is, other than the fact that there are a couple of rule pairs, each of which states explicitly that one or the other must be chosen?
 
Last edited:
  • #10
I have to admit, I'm rather curious to hear what you see as the problems with that coding standard, D H. My safety critical work was in C, and we had a different standard built up for that, so I don't know this one that well. Skimming it, so far, it seems like mostly good coding practice. But I just started looking at it, so I really do want to hear what you see as problems in there.

On the other hand, I'm not sure I agree with you that it "contradicts" itself. Under the circumstances (having to deal with C++, which gives you plenty of rope with which to hang yourself), I can see why they wanted to give alternatives.

Take, for example, guidelines 14.3 and 14.4. I prefer the way it's described in 14.4:
Code:
#ifdef SOME_FLAG
#   define SOME_OTHER_FLAG
#else
#   define YET_ANOTHER_FLAG
#endif
But I could work with the other way, not indenting them, if it was the standard. I would shrug that one off as a bit of a matter of taste. But it's best to have a standard in that area so the code is consistent.

The guidelines which are mutually exclusive are clearly marked, and I really don't consider that as contradicting itself. In fact, I agree with that decision. It's up to each implementer to go through and specify which rules are part of their standard, IMO.

I'll just leave with a short but optional story of coding standards gone wild. I'm talking about the usual "no magic numbers" rule, which I agree with in spirit. However, we had a lint-like process that identified any bare numbers in the code and complained.

Now, sometimes, you just don't have a good name for some constant in a calculation. There must be exceptions to such a rule. You try implementing Runge-Kutta type algorithms without "magic numbers", for example. We actually had developers doing this:

#define TWO 2

That would shut up the code checker, but as you might imagine, doesn't really lead to better code. In most cases, it just made it harder to check the correctness of the algorithm.
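To make the Runge-Kutta point concrete, here is a generic textbook fourth-order step (not our project's code, just an illustration); the constants are part of the published method, and naming them would only make the code harder to check against the formulas:

```cpp
#include <cmath>

// One classic fourth-order Runge-Kutta step for dy/dt = f(t, y).
// The 2s and the 6 come straight from the published method; renaming
// them TWO and SIX would add nothing but noise.
double rk4_step(double (*f)(double, double), double t, double y, double h) {
    const double k1 = f(t, y);
    const double k2 = f(t + h / 2.0, y + h * k1 / 2.0);
    const double k3 = f(t + h / 2.0, y + h * k2 / 2.0);
    const double k4 = f(t + h, y + h * k3);
    return y + h * (k1 + 2.0 * k2 + 2.0 * k3 + k4) / 6.0;
}
```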
 
Last edited:
  • #11
Just as a starter, it's 71 pages long. This puts the emphasis on the wrong syllable. Coding represents 10% or less of the effort in a safety critical project. Remember rule #0: Don't sweat the small stuff.

Rule 2.1. Comments in the code are not the place to document waivers, nor are they the place to document significant design decisions. That said, this is the *only* rule on comments? Please.

Rule 3.1.1. This is a good rule in general except for the fact that C++ (stupidly) provides, gratis, a default constructor, a copy constructor, and an assignment operator. It is a common practice to declare the copy constructor and assignment operator private and, oops, "forget" to implement them. Fortunately C++0x will fix this gaping hole in the language. Until then, I want to see that the programmer has used this trick right up front.
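For illustration, the trick looks like this, with the C++0x fix alongside (class names are made up):

```cpp
// Pre-C++0x idiom: declare the copy operations private and deliberately
// leave them unimplemented -- copying fails at compile time (or at link
// time, if a member or friend tries it).
class Sensor {
public:
    Sensor() : reading_(0) {}
    int reading() const { return reading_; }
private:
    Sensor(const Sensor&);             // intentionally not implemented
    Sensor& operator=(const Sensor&);  // intentionally not implemented
    int reading_;
};

// The C++0x fix: '= delete' states the intent in the language itself and
// yields a clear diagnostic instead of a cryptic link error.
class Actuator {
public:
    Actuator() : position_(0) {}
    Actuator(const Actuator&) = delete;
    Actuator& operator=(const Actuator&) = delete;
    int position() const { return position_; }
private:
    int position_;
};
```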

Rule 3.1.4. Requirements should never specify an implementation. Ever.

Rule 3.1.6. This rule, like many of the rules in this standard, is chasing compiler problems that were solved in the previous millennium. It's past time to join the 21st century.

Guideline 3.1.7. Placing function definitions in the body of the class detracts from understanding the class. Even a simple three line function expands to ten lines or more if one properly documents the function such as for processing by doxygen. It is far better to have a one line comment and a prototype in the class body and to inline the function outside the body, preferably in a separate inline header file.
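For illustration, the layout I mean looks roughly like this (names invented):

```cpp
// widget.h -- the class body holds one-line comments and prototypes only,
// so the class itself reads like a table of contents.
class Widget {
public:
    Widget();
    int  size() const;   ///< Number of elements held.
    bool empty() const;  ///< True when no elements are held.
private:
    int size_;
};

// widget.inl -- the definitions, inlined outside the class body (shown in
// one file here for the sake of a self-contained example).
inline Widget::Widget() : size_(0) {}
inline int  Widget::size() const  { return size_; }
inline bool Widget::empty() const { return size_ == 0; }
```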

Guideline 3.1.9. Even a half-assed code review will detect duplicated code. All it takes is a reviewer to say "this code is duplicated in three places. That duplication will cost a lot of money in documentation, testing, verification, and validation." A half-assed code review will catch many of the problems mentioned in these standards. A good code review process will catch many, many more problems. Having humans evaluate a chunk of code still stands as the #1 way to catch bugs. Testing, V&V, and IV&V follow. Meeting or violating coding standards is way down on the list. It is best to keep the coding standards simple and stupid. This is one of many violations of that metarule.

Guideline 3.1.12. Pretty much useless in scientific programming (what, exactly, should << to an ostream do for a 300x300 spherical harmonics gravity model?), and downright stupid in an environment that says thou shalt not use C++ io (or C io). In some cases a related rule is useful (thou shalt provide a serialization/deserialization capability for all data), but that is rather special purpose.

Rule 3.2.2. This is a part of the C++ standard itself, and a shortcoming that one day will be rectified. Why repeat it?

Rule 3.3.5. Have these guys not heard of the using clause? Besides, a simple rule, "Thou shalt compile clean with <project-specific> compiler settings", will catch this problem and a whole lot more.
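For illustration (invented classes), the name-hiding problem this rule chases, and the one-line using-declaration fix:

```cpp
// Declaring Derived::f(int) hides *all* of Base's f overloads, so a call
// like Derived().f(2.5) would silently convert 2.5 to int and call
// f(int). The using-declaration re-exposes the hidden overloads.
class Base {
public:
    virtual ~Base() {}
    virtual int f(int x)    { return x; }
    virtual int f(double x) { return static_cast<int>(x) + 100; }
};

class Derived : public Base {
public:
    using Base::f;  // without this line, f(double) is hidden
    virtual int f(int x) { return 2 * x; }
};
```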

Rule 4.1. A rule on complexity is good. A limit of ten is not good. This rule of ten is a cargo cult rule with zero backing in practice. Multiple studies have shown that this limit is too low, that it in fact leads to buggier code. NASA IV&V uses 20 as a limit. There are occasions when an incredibly high complexity is perfectly OK. I have blessed (granted a waiver to) a function with a complexity greater than 500.

Rule 4.2. Static path count is one of many, many metrics that have a relatively low correlation with bugginess, quality, understandability, etc. Even worse, what it does measure is better measured by cyclomatic complexity (apparently even better is extended cyclomatic complexity). What about the #1 metric that correlates with bugginess? Not a mention. It's too trivial. After all these years of various people developing various metrics, nothing tops lines of code. Nothing.

Guideline 5.7. This flies in the face of recommended practice for iterators.

Rule 5.9. For one thing, it looks to me like this rule allows the use of Duff's device. For another, the single point of entry / single point of exit rule is responsible for a lot of accidental complexity, maybe more so than any other ill-considered, cargo cult rule.

Rule 8.4.9. Sorry, STL vectors do not cut it when I want a 3-vector. Or a 3x3 matrix. Amazingly enough, 3-vectors and 3x3 matrices pop up quite often in my work. A lot more often than any need for that misnamed std::vector (it is not a vector).

Rule 10.1. Like Grep, I too have seen fools mandate the use of #define TWO 2. Not using magic numbers is a good guideline. When specified as a rule, it is the fools who rule.

Rule 10.3. Yet another rule that is explicit in the standard. The example is particularly bad. Can't we just say "thou shalt not depend on undefined behavior"?

Rule 10.5. This rules out using a=b=c=0;

Guideline 10.6. This is one of the dumbest ideas ever. It looks like the authors of this standard developed rules by browsing every standard they could find.

Rule 10.10. Whatever the true intent of this rule is, what they said makes no sense. (And it flies in the face of some other languages whose goal is no side-effects.)

That's a bit more than half-way through this work of art whose sole aim is to sell a product, and I did not hit all of the objectionable points. Do I really need to continue?
 
  • #12
D H said:
Just as a starter, it's 71 pages long. This puts the emphasis on the wrong syllable. Coding represents 10% or less of the effort in a safety critical project. Remember rule #0: Don't sweat the small stuff.

It does look more like a grab bag of (mostly) good programming practices than a good, usable coding standard. At least on its own. I can see your point about it being too long.

Though I don't suppose I'd want a standard of this length, a lot of the rules might still be useful to consider for less experienced, perhaps new teammates. And also useful as guidelines during code reviews. Like your excellent point about the use of iterators and for loops in guideline 5.7. Though what they say is sometimes true, iterators are a very notable exception. Important enough to make 5.7 wrong without that qualification. A less experienced developer might get it wrong trying to follow that rule slavishly.

Your comment on Guideline 3.1.9, on catching duplicated code in reviews, certainly reflects my views as well. A simple guideline warning against cut and paste programming should cover it, and I wouldn't object to that. Shouldn't even have to tell that to developers, but I've seen some code that makes me think it's worth repeating. Though I'd rather see it in a book on good programming practices than a standard.

Guideline 3.1.12 (on providing a << operator) is another good example of what I'd consider to be a good general design pattern in many cases, but not so much that it should be in a standard to be adhered to. Still, I do try and provide one to print something, even if it's minimal, and it makes sense.

D H said:
Rule 10.1. Like grep, I too have seen fools mandate the use of #define TWO 2. Not using magic numbers is a good guideline. When specified as a rule, it is the fools who rule.

I admit, I laughed when I heard someone else has had to face that kind of stupidity. My apologies, nobody should have to go through that. :rofl: I have my pride as a programmer, and there's a limit to what I will sign my name to. And I won't sign my name to code with "#define TWO 2" in it. Just not going to happen.

D H said:
Guideline 10.6. This is one of the dumbest ideas ever. It looks like the authors of this standard developed rules by browsing every standard they could find.
Well, if I can make fewer errors by writing things in a way that eliminates certain possible mistakes, I figure it's worth it. So I'm "guilty" of doing that at times. But frankly, I haven't made that mistake in well over a decade (i.e. using = instead of == or !=, etc). I usually only see beginners make that mistake. But it could happen.

I should add that, to support your case, gcc with -Wall does complain if you do, for example, 'while (num = 1)'. Gcc isn't too bad for warning you of that stuff if you use -Wall. In a specific environment where I can control it, I would just say "use -Wall and leave no warnings" and not bother developers with coding standard rules for things the compiler gives warnings on.
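For illustration (a made-up function): with the conventional extra parentheses and an explicit comparison, gcc stays quiet even when the assignment in the condition is deliberate:

```cpp
// gcc -Wall warns on 'while (c = *s++)' ("suggest parentheses around
// assignment used as truth value"). The extra parentheses plus an
// explicit comparison document that the assignment is intended.
int length_via_assignment(const char* s) {
    int n = 0;
    char c;
    while ((c = *s++) != '\0') {  // deliberate assignment, made explicit
        ++n;
    }
    return n;
}
```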

That said, I agree with you that it does look like they just grabbed every rule they could find.

D H said:
Rule 10.10. Whatever the true intent of this rule is, what they said makes no sense. (And it flies in the face of some other languages whose goal is no side-effects.)

You're right about that. The example and justification they give make sense, but the phrasing of the rule is just incomplete. Annoyingly so. I can't think of a better phrasing, but as written... ouch.

D H said:
That's a bit more than half-way through this work of art whose sole aim is to sell a product, and I did not hit all of the objectionable points. Do I really need to continue?

Well, you don't have to continue, but I'd certainly be interested. No obligation, of course; I know it takes time to go through it. Not sure I agree with everything, but we agree more than not, and your points are well taken.

Thanks for explaining.
 
  • #13
Whoa, that's quite a long list!
All right, I accept the challenge :smile:.
Btw, it would be nice if you conceded a point every now and then :wink:.

D H said:
Just as a starter, it's 71 pages long. This puts the emphasis on the wrong syllable. Coding represents 10% or less of the effort in a safety critical project. Remember rule #0: Don't sweat the small stuff.

I disagree. In large scale software development, every bug that has to be found and solved is a pain in the ***, if only due to the overhead. If a couple of bugs can be prevented, that is worthwhile.

I know there are various studies that relate bugs in code to overhead and more importantly to catastrophic failures in practice.

And obviously NASA thinks it's important. I found the following quote:

"The NASA has a defect density of 0.004 bugs/KLOC (vs 5.00 bugs/KLOC for the Industry) but this has a cost of $850.0/LOC (vs $5.0/LOC for the Industry). Source: agileindia.org/agilecoimbatore07/presentations/… – Pascal Thivent Sep 21 '09 at 21:43"

D H said:
Rule 2.1. Comments in the code are not the place to document waivers, nor are they place to document significant design decisions. That said, this is the *only* rule on comments? Please.

Agreed, although this only means that you need additional coding guidelines.

D H said:
Rule 3.1.1. This is a good rule in general except for the fact that C++ (stupidly) provides, a gratis, a default constructor, a copy constructor and an assignment operator. It is a common practice to declare the copy constructor and assignment operator private and, oops, forgot to implement them. Fortunately C++0x will fix this gaping hole in the language. Until then, I want to see that the programmer has used this trick right up front.

Yes, this is a good rule.
As for the gaping hole in the language, it is addressed a little further in the standard (Guideline 3.1.13), that is, the copy constructor and assignment operator are required to be defined. We use a template that defines them private (and not implemented) by default. And we use a static analysis tool that warns if they're missing.

D H said:
Rule 3.1.4. Requirements should never specify an implementation. Ever.

Actually, this is a bad rule for defining a copy-constructor, because of the implications it has that are not addressed.
However 3.1.5 (exclusive with 3.1.4) is a good rule. I've already seen too many stupid bugs because this rule was not observed.
And note that it is a check list, not an enforcing implementation format.

D H said:
Rule 3.1.6. This rule, like many of the rules in this standard, is chasing compiler problems that were solved in the previous millennium. It's past time to join the 21st century.

How were they solved (inline virtual functions)?

D H said:
Guideline 3.1.7. Placing function definitions in the body of the class detracts from understanding the class. Even a simple three line function expands to ten lines or more if one properly documents the function such as for processing by doxygen. It is far better to have a one line comment and a prototype in the class body and to inline the function outside the body, preferably in a separate inline header file.

No argument there, although I don't like long inline functions in header files.

D H said:
Guideline 3.1.9. Even a half-assed code review will detect duplicated code. All it takes is a reviewer to say "this code is duplicated in three places. That duplication will cost a lot of money in documentation, testing, verification, and validation." A half-assed code review will catch many of the problems mentioned in these standards. A good code review process will catch many, many more problems. Having humans evaluate a chunk of code still stands as the #1 way to catch bugs. Testing, V&V, and IV&V follow. Meeting or violating coding standards is way down on the list. It is best to keep the coding standards simple and stupid. This is one of many violations of that metarule.

I think you're agreeing with the Guideline?
Note that the Coding Standard does not say how the rules are verified, although I think it's a good thing to have a static analysis tool generate warnings for duplication.

D H said:
Guideline 3.1.12. Pretty much useless in scientific programming (what, exactly, should << to an ostream do for a 300x300 spherical harmonics gravity model?), and downright stupid in an environment that says thou shalt not use C++ io (or C io). In some cases a related rule is useful (thou shalt provide a serialization/deserialization capability for all data), but that is rather special purpose.

I agree, but this is a "Guideline", which means it is not required unless the programmers in your firm want it.

D H said:
Rule 3.2.2. This is a part of the C++ standard itself, and a short falling that one day will be rectified. Why repeat it?

If members are initialised in the constructor using other members, this leads to bugs, because the order may be different than you expect. In other words, some members may end up uninitialised.
Anyway, it bugs me no end if the order of members is haphazard and unpredictable.
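A minimal sketch of the trap (invented class): members are initialised in declaration order, not in the order written in the initializer list:

```cpp
#include <cstddef>

class Buffer {
public:
    // Fine: len_ is declared (and therefore initialised) before buf_, so
    // buf_'s initializer may safely read len_. If the two member
    // declarations below were swapped, buf_(new char[len_]) would read an
    // uninitialised len_ -- and the compiler would still run the
    // initializers in declaration order, no matter how the initializer
    // list is written. That is exactly the bug rule 3.2.2 guards against.
    explicit Buffer(std::size_t n) : len_(n), buf_(new char[len_]) {}
    ~Buffer() { delete[] buf_; }
    std::size_t length() const { return len_; }
private:
    std::size_t len_;  // declared first: initialised first
    char* buf_;        // initialised second; may safely use len_
};
```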

D H said:
Rule 3.3.5. Have these guys not heard of the using clause? Besides, a simple rule, "Thou shalt compile clean with <project-specific> compiler settings", will catch this problem and a whole lot more.

I don't understand.
If you override one version of a virtual method, it's weird if you don't override all versions of that same virtual method.

D H said:
Rule 4.1. A rule on complexity is good. A limit of ten is not good. This rule of ten is a cargo cult rule with zero backing in practice. Multiple studies have shown that this limit is too low, that it in fact leads to buggier code. NASA IV&V uses 20 as a limit. There are occasions when an incredibly high complexity is perfectly OK. I have blessed (granted a waiver to) a function with a complexity greater than 500.

I can predict the reason for your waiver: it's a switch-statement, isn't it?
I believe there should be an exception for a switch-statement.

Even better would be a separate rule for switch-statements.
I believe it's a bad thing if many methods in your class have the same switch-statement. It means the design of the class is flawed.

I agree that 10 is too low, but as you said, your company is free to change or discard any of the rules in the standard. It's enough that such is stated in a quality assurance manual.

D H said:
Rule 4.2. Static path count is one of many, many metrics that has a relatively low correlation with bugginess, quality, understandability, etc. Even worse, what it does measure is better measured by cyclomatic complexity (apparently even better is extended cylcomatic complexity). What about the #1 metric that correlates with bugginess? Not a mention. It's too trivial. After all these years of various people developing various metrics, nothing tops lines of code. Nothing.

Agreed.
There should be a rule in here that says that the number of lines of code in a function should not exceed some limit, instead of this one.

D H said:
Guideline 5.7. This flies in the face of recommended practice for iterators.

This rule is indeed too indiscriminate.
But I do believe that a loop-condition should not be an unnecessarily complex expression which causes an unnecessary performance penalty.

D H said:
Rule 5.9. For one thing, it looks to me like this rule allows the use of Duff's device. For another, the single point of entry / single point of exit rule is responsible for a lot accidental complexity, maybe more so than any other ill-considered, cargo cult rule.

I agree, although almost all of my colleagues don't.
I do believe it's best if all programmers in a team use the same structuring for a function, so I've stopped arguing about this.
Btw, I found the reference to Duff's device interesting reading material.
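For other readers following along: Duff's device interleaves a switch with a do-while so that the case labels jump into the middle of the unrolled loop body. Something like this (adapted to copy within memory rather than to a device register):

```cpp
#include <cstddef>

// Duff's device: an 8x-unrolled copy loop where the switch jumps into
// the middle of the do-while to handle the remainder. Legal C and C++,
// and exactly the sort of thing a naive rule about case labels misses.
void duff_copy(char* to, const char* from, std::size_t count) {
    if (count == 0) return;
    std::size_t n = (count + 7) / 8;
    switch (count % 8) {
    case 0: do { *to++ = *from++;
    case 7:      *to++ = *from++;
    case 6:      *to++ = *from++;
    case 5:      *to++ = *from++;
    case 4:      *to++ = *from++;
    case 3:      *to++ = *from++;
    case 2:      *to++ = *from++;
    case 1:      *to++ = *from++;
            } while (--n > 0);
    }
}
```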

D H said:
Rule 8.4.9. Sorry, STL vectors do not cut it when I want a 3-vector. Or a 3x3 matrix. Amazingly enough, 3 vectors and 3x3 matrices pop up quite often in my work. A lot more than does the need for that misnamed std::vector (it is not a vector).

A 3-vector or 3x3 matrix is NOT an unbounded array.
And btw, a vector does *not* enforce array bounds.
I do believe that for arrays of variable or large length, vector really is the best choice.
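For a fixed-size 3-vector, something like this (an invented sketch, not anyone's flight code) gives value semantics with no heap allocation, which std::vector can't:

```cpp
#include <cmath>

// A fixed-size geometric 3-vector: value semantics, stack storage, and a
// compile-time guarantee of exactly three components -- none of which
// std::vector (a resizable heap-backed array) provides.
class Vector3 {
public:
    Vector3(double x, double y, double z) { v_[0] = x; v_[1] = y; v_[2] = z; }
    double dot(const Vector3& o) const {
        return v_[0] * o.v_[0] + v_[1] * o.v_[1] + v_[2] * o.v_[2];
    }
    double norm() const { return std::sqrt(dot(*this)); }
private:
    double v_[3];
};
```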

D H said:
Rule 10.1. Like grep, I too have seen fools mandate the use of #define TWO 2. Not using magic numbers is a good guideline. When specified as a rule, it is the fools who rule.

Have you ever seen a function call like:
some_function(2, 3, true, false, false);
If I read something like that I think: what the **.
And it doesn't become better with the use of TWO ;-).

I agree that a warning on '2' in a mathematical formula is probably a false positive.

D H said:
Rule 10.3. Yet another rule that is explicit in the standard. The example is particularly bad. Can't we just say "thou shalt not depend on undefined behavior"?

Ah, but we still need a list of all the "undefined behaviors" there are.
The coding standard is a good place to put that list.

D H said:
Rule 10.5. This rules out using a=b=c=0;

I see the mathematical algorithms popping up again.
That's where you (want to) use that, right?
For mathematical algorithms I agree.
In my experience in other cases you don't really need this, and in general the code is easier to read if declarations and initialisations are on different lines.

D H said:
Guideline 10.6. This is one of the dumbest ideas ever. It looks like the authors of this standard developed rules by browsing every standard they could find.

Better safe than sorry.
What's wrong with coding guidelines that make sure a programmer can't make a mistake even if he wanted to?

And yes, I believe they browsed every standard they could find and tried to put at least the most useful rules in.
But knowing programmers, each and every one (including myself), disagrees with any number of rules.
I still think it's a good idea that the programmers in a team use a common set of coding guidelines (even if they disagree with them).

D H said:
Rule 10.10. Whatever the true intent of this rule is, what they said makes no sense. (And it flies in the face of some other languages whose goal is no side-effects.)

Basically it says: do not write redundant code.
The term "side-effect" may be misleading here.
It's a statement that doesn't "do" anything.
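For illustration (a hypothetical function), the kind of statement the rule targets is usually a comparison typed where an assignment was meant:

```cpp
// A statement with no side effect is almost always a typo for an
// assignment:
//     status == 0;   // compares and throws the result away -- does nothing
// versus the intended
//     status = 0;
// Compilers flag the first form when warnings are enabled (-Wunused-value).
int clamp_to_zero(int status) {
    if (status != 0) {
        status = 0;  // intended assignment; 'status == 0;' here would
                     // compile cleanly but have no effect at all
    }
    return status;
}
```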

D H said:
That's a bit more than half-way through this work of art whose sole aim is to sell a product, and I did not hit all of the objectionable points. Do I really need to continue?


I see you have skipped a lot of the really useful rules.
Can it be you agree with a number of them?

Note that any rule can be changed or discarded.
It's enough that such is stated in a quality assurance manual (which is required).
 
  • #14
Grep said:
Well, you don't have to continue, but I'd certainly be interested.
I'll just pick out a few of the doozies.

Rule 8.3.3. Missed this one earlier. What about mutable and volatile? Except for mutexes, mutable should in my mind be a verboten keyword. Regarding volatile, many compilers do not handle volatile correctly (e.g., see http://www.cs.utah.edu/~regehr/papers/emsoft08-preprint.pdf), so volatile should arguably be verboten as well. There are always waivers, and the waiver for use of volatile should include tests of the usage to ensure that the system behaves as expected.

Rule 11.4. One of the ugly things about C++ is that there is no "take the reference of" operator. One side-effect of this shortcoming is that on reading a statement such as foo=bar.baz(qux), there is no easy way of knowing whether qux is passed by value or by reference to bar's baz method. One has to look at the prototype for the method, making for a disjointed reading of the code.

Different coding standards very much disagree on this rule. Some (e.g., Google) go so far as to ban non-const pass by reference. They mandate use of pass by pointer if the argument is to be modified. We didn't countenance that approach. It cuts off too much of the language and it requires a good defensive programmer to check for a null pointer. (Left unsaid is that it is possible to have a null reference, so an over-the-top defensive programmer might still check for that possibility.) Our standard makes pass by reference illegal for atomic data, pass by value illegal for non-atomic data, with pass by reference being the preferred mechanism for non-atomic data.

Rule 11.7. Returning a const reference to a temporary is perfectly valid in C++ and is arguably the "Most Important const".

Guideline 13.1. Well there goes all of <cmath>, or for that matter, a good-sized chunk of the language. The sad fact is that the language spec is a bit weak. Toy projects can avoid implementation-defined behavior. Realistic projects: Good luck with that.

Rule 13.6. As written, this rules out use of C++ <limits> and C <stdint.h>, <float.h>, and <fenv.h>.

Rule 14.9. This is the rule that started this thread. Very nice, but what exactly constitutes a system header? Example: is it #include <Xm/Xm.h> or #include "Xm/Xm.h"? What if the program is ported to a Mac, which doesn't provide Motif? I would say that the right usage is still <Xm/Xm.h>, but the notion of a system file is more than a bit fuzzy.

Rule 14.10. Really? REALLY? This is without a doubt the dumbest of the bunch, and may well be the Dumbest. Rule. Ever. The man page on sockets says to #include <sys/socket.h>. I expect to see exactly that in the source code that calls socket(). Writing #include <socket.h> and mandating that the program be compiled with -I/usr/include/sys is downright stupid. The same goes for #include <Xm/Xm.h>, and a host of others. Many software packages depend on having that partial path in the #include directive to make for a unique specifier.

Rule 14.15. Why rule 14.14 then? A good code review process will catch the use of macros where an inlined function would work quite nicely. Making this a rule means a waiver has to be granted. I want the waiver log to identify truly troublesome code. Foolish rules such as this will fill the waiver log with non-problems, hiding those truly troublesome items such as a function with a cyclomatic complexity of 500.

Rule 14.16. Stroustrup is wrong on this. The standard, rightfully so, has multiple rules against mixing types: don't compare a double to an int, etc. Yet I'm supposed to compare a pointer to an int? Pointers are conceptually different beasts than ints. C++0x will fix this problem. Until then, use NULL.

Rule 14.19. Didn't they say this already in rule 14.15?


This standard is chock full of duplicative rules, has too many rules that violate the way people think (e.g., if (0 != foo) { bar();}), and has far, far too many silly rules that sweat the small stuff.
 
  • #15
I like Serena said:
Btw, it would be nice if you concede a point every now and then :wink:.
I will concede that some of their rules are valid. After all, by snarfing up every rule they could find, they were bound to come up with a handful that truly are valid. Another concession: at least they didn't specify what they think is the right way to indent code. Actually, with so many personal preference type rules I am quite surprised that they didn't do that.
I like Serena said:
D H said:
Just as a starter, it's 71 pages long. This puts the emphasis on the wrong syllable. Coding represents 10% or less of the effort in a safety critical project. Remember rule #0: Don't sweat the small stuff.
I disagree. In large scale software development, every bug that has to be found and solved is a pain in the ***, if only due to the overhead. If a couple of bugs can be prevented, that is worthwhile.
You missed the point. Too many of the rules are small stuff, personal preference rules. Many of the things the rules are finding aren't bugs. One of the reasons so few projects use lint is that lint generates a lot of lint. This makes it hard to find the real problems hiding amongst the non-problems, which makes lint worse than useless.

This standard exists for one reason only: To sell a tool. A lousy tool.

I like Serena said:
How were they solved (inline virtual functions)?
For example, with weak symbols, vague linkage, and vtables. This was a problem back in the day when most C++ compilers were not compliant. Nowadays this is pretty much a non-problem.

I like Serena said:
D H said:
Guideline 3.1.9. Even a half-assed code review will detect duplicated code.
I think you're agreeing with the Guideline?
No, I am not. You don't need a rule against cut-and-paste programming. This is just one of many well-understood rules that need not be spelled out. Automated tools that check for cut-and-paste programming tend to have a problem with false positives and false negatives.

Anyway, it bugs me no end if the order of members is haphazard and unpredictable.
You missed the point again. There is no reason to replicate in the programming standards something that is already in the standard for the language. There is no reason to have a special-purpose rule that can be handled by a simple rule such as requiring the code to compile -Wall clean.
I like Serena said:
D H said:
Rule 3.3.5. Have these guys not heard of the using clause?
I don't understand.
If you override one overload of a virtual method, it's weird if you don't override all overloads of that same virtual method.
Not necessarily. What is necessary is not to inadvertently hide one of those base class methods. The justification of the rule hints at the use of the using declaration. The text of the rule does not. Once again, a simple compile flag can catch this problem.

I like Serena said:
D H said:
Rule 8.4.9. Sorry, STL vectors do not cut it when I want a 3-vector.
A 3-vector or 3x3 matrix is NOT an unbounded array.
And btw, a vector does *not* enforce array bounds.
I do believe that for arrays of variable or large length, vector really is the best choice.
std::vector is far from the best choice for what is inherently a fixed-length array. The Boost libraries do offer, and C++0x will offer, a somewhat viable alternative, but it's not quite there. Too much of a computer science flair, and not enough scientific programming.

Ah, but we still need a list of all the "undefined behaviors" there are.
The coding standard is a good place to put that list.
No, it's not. The language standard is a perfectly good place to define undefined behaviors.
I like Serena said:
D H said:
Guideline 10.6. This is one of the dumbest ideas ever.
Better safe than sorry.
What's wrong with coding guidelines that make sure a programmer can't make a mistake even if he wanted to?
1. It is stupid.
2. It is worse than stupid. It is counter to the way people think.
3. Where enforced, this rule causes more dissension amongst the ranks than any other.
4. It doesn't address the real problem, which is the use of the assignment operator in a boolean expression. Ban that and be done with it.

I see you have skipped a lot of the really useful rules.
Can it be you agree with a number of them?
Sure. They snarfed up a lot of rules. Some are bound to be useful. Too many are not. Too many are downright harmful. I'm not against programming standards in general. I am against bad ones, and this standard comes very close to the top of the list of bad standards.
 

What is the purpose of C++ programming standards?

C++ programming standards provide a set of guidelines and rules for writing code in the C++ programming language. These standards aim to promote consistency, readability, and maintainability of code, making it easier for programmers to understand and work with each other's code.

What are some common C++ programming standards?

Some common C++ programming standards include naming conventions for variables, functions, and classes; formatting rules for code indentation and line breaks; and guidelines for error handling and commenting. These standards may vary depending on the organization or community using them.

Why is it important to follow C++ programming standards?

Following C++ programming standards can improve the overall quality of code and make it easier to maintain and debug. It also allows for better collaboration between programmers, as everyone is following the same set of rules. Additionally, adhering to standards can make code more portable and compatible with different platforms and compilers.

Can C++ programming standards change over time?

Yes, C++ programming standards can change over time as the language evolves and new best practices emerge. It is important for programmers to stay updated on the latest standards and adapt their coding style accordingly.

Are there tools available to help enforce C++ programming standards?

Yes, there are various tools and plugins available that can help enforce C++ programming standards. These tools can automatically check for issues such as incorrect formatting, unused variables, and missing documentation. Some popular examples include Clang Format, Cppcheck, and PVS-Studio.
