Should I #include an entire library if I only need one function from it?

AI Thread Summary
The discussion centers on the best practices for using standard libraries in programming, particularly regarding the implementation of a swap function and the use of constants like INT_MAX. It emphasizes the importance of leveraging standard libraries instead of creating custom solutions, as this can lead to unnecessary complications and inefficiencies in code compilation. The conversation clarifies that standard header files, such as <algorithm>, do not contribute to the final object file unless their functions are explicitly used, and that modern compilers may utilize precompiled headers to optimize the process. It also highlights the potential pitfalls of copying constants directly into code, which can hinder portability and readability. The concept of "premature optimization" is discussed, cautioning against over-optimizing code at the expense of clarity and maintainability. Overall, the consensus advocates for using standard libraries to ensure code quality and longevity.
Jamin2112
What's the protocol on this? I'm writing a program right now and I need a swap function, a <climits> constant, and a couple of other things. I could obviously write my own swap function, copy the value of INT_MAX, etc., but isn't the protocol to always use standard libraries instead of reinventing the wheel? A lot of unnecessary **** is going to compile.
 
Jamin2112 said:
What's the protocol on this? I'm writing a program right now and I need a swap function, a <climits> constant, and a couple of other things. I could obviously write my own swap function, copy the value of INT_MAX, etc., but isn't the protocol to always use standard libraries instead of reinventing the wheel? A lot of unnecessary **** is going to compile.
The header files, such as stdio.h and the newer C++ header files without the .h suffix, are not considered libraries. They are just files that contain constants and function prototypes, but generally not the definitions for the functions. It's the linker that actually brings in the code that your program uses.
 
I think the original post is referring to the standard template includes like <algorithm> for std::swap. If your program only uses std::swap from <algorithm>, then that is all that will go into the object file produced by the compiler, so this is handled at the compile step as opposed to the link step. The rest of the templates in <algorithm> will impact the compile time, but they won't end up in the object file.

If you were to use some <stdlib.h> function, such as malloc(), then the object file that contains malloc() will be extracted from the library, but the rest of the object files will not be used. This is handled by the linker.
 
Jamin2112 said:
A lot of unnecessary **** is going to compile.

Not necessarily. Many compiler systems use precompiled versions of the standard header files.

In fact there is nothing in the standard which says that standard header files even have to exist as files in the computer's file system, so long as the compiler recognizes the standard names. In principle, everything in every standard header could be permanently hard-coded into the compiler, whether you use it or not.

As others have said, just declaring entities, but not defining them, does not generate any code.

Copying values like INT_MAX from the standard headers into your code is a VERY dumb idea. Wait till you port your code from your current wimpy little 64-bit 8 GB-memory PC to the new system you will buy in a few years' time... :smile:
 
Jamin2112 said:
What's the protocol on this? I'm writing a program right now and I need a swap function, a <climits> constant, and a couple of other things. I could obviously write my own swap function, copy the value of INT_MAX, etc., but isn't the protocol to always use standard libraries instead of reinventing the wheel? A lot of unnecessary **** is going to compile.
All of the responses you have received are correct. Include files, especially those for operating systems, are intended to be used without ****ing your executable.

There is one more item that may be of interest to you. In certain build environments, including the most common ones used for MS Windows, the "library" is actually resolved through DLLs, and a single set of those DLLs is shared by all applications and parts of the operating system. In those cases, access to a single function can cause a complete DLL to be loaded into memory. If this is a problem, there is a "static" library option that will cause only the needed functions to be linked into the executable.
 
Jamin2112 said:
A lot of unnecessary **** is going to compile.
Yes, it is. Specifically, the compiler is going to either parse those headers (and all the headers they drag in, and all the headers those secondary headers drag in, etc.), or it's going to use precompiled headers and end up doing all of that except for the parsing part.

So what? You are doing something called "premature optimization" here. To quote Donald Knuth, "Premature optimization is the root of all evil (or at least most of it) in programming."

Code written for professional use is inevitably going to be read by humans, many times over. You are writing as much for the human reader as you are for the compiler. Code written for professional use is intended to have a long shelf life, and often needs to be portable across multiple architectures. The actions you are proposing will inevitably make your code less readable and less portable.

One last quote about professionally written code: "Write your code as if the person who maintains your code is a homicidal maniac who knows where you live."
 