elias001
@.Scott I think I will keep your advice in mind when I start learning about databases on my own.
elias001 said:
@.Scott I am coming from the point of view of writing code and respecting memory management and safety issues. I did a quick Google search and apparently smart pointers are not in C, so does that mean C doesn't get the buff treatment compared to C++?

Does C get the "buff treatment"? I can interpret that phrase in a few ways - in all cases the answer is "No, it doesn't".
elias001 said:
@.Scott I am coming from the point of view of writing code and respecting memory management and safety issues. I did a quick Google search and apparently smart pointers are not in C, so does that mean C doesn't get the buff treatment compared to C++?

Although C++ is more powerful, C was used in the development of automotive radar at a former employer. Many of the C++ features were seen as violating MISRA requirements and were not allowed in the actual product code. So, in that case, C was viewed as "more buff".
.Scott said:
Null pointers are available in both C and C++. You are the first person I have heard describe them as "safe" or "armor". They are more commonly associated with terms like "Achilles heel", vulnerable, dangerous. Of course, they are easy to use: malloc some memory for a structure and cast the resulting void pointer to a pointer to that structure.
If your primary programming interests are dragons, etc., then safety really isn't an issue. The worst that can happen is that the program will crash because of a bad pointer or memory leak and ruin the game.
With more serious applications, there is a tool called "Coverity" that can do a very decent job of tracking how you are using null pointers (which it will complain about) and other pointers - calling out bad indices, memory leaks, and scores of other issues.
void *p = 0;
void *p = NULL;
void *p = nullptr; // C++11 and later

assert(p);
if (!p) {
    perror("Unexpected NULL pointer (p): ");
    exit(EXIT_FAILURE);
}

assert(p);
if (!p)
    throw std::invalid_argument("Unexpected NULL pointer (p)."); // or any other exception of your choice

std::shared_ptr<BespokeObject> smart_ptr = std::make_shared<BespokeObject>(/* optional args to object constructor */);
smart_ptr->bespoke_object_func();
smart_ptr.reset(new BespokeObject()); // or just plain reset() if you're the last client, else let it go out of scope.
.Scott said:
design a relational DB for the Air Force procurement center - which, at the time, was an extensive part of the Wright-Patterson AFB in Ohio

Wow! That sounds like a mind-boggling task! My hat's off to you!
elias001 said:
@FactChecker I saw a YouTube video where it was shown that the creator of C++ publicly rallied the C++ community to defend C++ because it is under threat or some such due to memory safety issues. I am getting a bunch of books on C pointers. By the way, there should be a special pointer in C called either 'the middle finger' or 'the bird'.

There are many complaints about pointers - and all pointers used in C and C++ have a pro and con constituency - with the 'con' group finding them "too dangerous".
Also, @Filip Larsen, @.Scott, @sbrothy and @jedishrfu, can I ask you folks to comment or give your opinion about this post on some CS books? Thank you in advance.
.Scott said:
There are many complaints about pointers - and all pointers used in C and C++ have a pro and con constituency - with the 'con' group finding them "too dangerous".

Exactly, choosing the right tool for the job at hand. Procedural, functional, OO. Whatever fits.
In the extreme, an integer such as 0xFF804500u might be cast to a pointer to a large structure, and then that pointer could be used to read and write that "random" memory location. That's about as dangerous an ability as you can create. And yet, that is exactly what is done when you control a device using memory-mapped registers - a very common method.
So, they're like cars or shovels - potentially dangerous tools. What do you do? Ban shovels and cars?
Hopefully you can help the users dig and drive safely.
It's the same with pointers. Any language that does not allow free use of pointers will be unable to support some applications. Or maybe you say that for certain applications, you stick to Python or some language where pointers are tightly controlled?
I would stick to the "driving a car" example. You might say that if the trip is less than a kilometer, you should do the "safe thing" and walk. But is walking all that safe - and in particular, is it safer than driving? It depends on the weather/health/neighborhood situation - and I would say it is best left to the traveler.
Similarly, I think the coding standards (including the selection of the coding language) should be left to the system and software designers/developers.
elias001 said:
@.Scott and @sbrothy well I am not sure if it is the media or whoever, but they make it sound like programs and software that use languages with manual memory management are less safe than Bowser's castle in a typical Mario Brothers game, and hackers can exploit vulnerabilities and steal data the way the Nintendo Kirby character sucks up bad guys.

Hackers exploit careless coding - or simplistic passwords - or users that will hand over control of the computer in exchange for a "Thank you". Unfortunately, under some conditions - as when Windows was being developed - there is a huge incentive to code carelessly.
elias001 said:
@.Scott and @sbrothy well I am not sure if it is the media or whoever, but they make it sound like programs and software that use languages with manual memory management are less safe than Bowser's castle in a typical Mario Brothers game, and hackers can exploit vulnerabilities and steal data the way the Nintendo Kirby character sucks up bad guys.
.Scott said:
Linux had less of a problem because it was less of a target and because it had better review.
I believe the number one code vulnerability exploited by hackers is buffer overruns that are triggered by trusting that incoming data will be properly formed.
In general, trusting that any major interface will be used properly needs to be thought through. Even if the party on the other side of a library function call is internal and trusted (even if it is you), doing validation checks on the calls through that interface allows you to fence in any problems that you run into later. For example, if you write a trig library for yourself, you don't want arcsine(2) to result in a core dump, a memory leak, or any memory overwrite. It's not that you might turn evil on yourself - but if you make a mistake, you want to be able to track it down routinely.
Of course, if you have no reason to trust the other side of an interface, you need to be paranoid. If you receive a record over the internet that's 30 bytes containing a field that claims to be 3000 bytes long, don't malloc 30 bytes and then copy 3000 bytes into it. That may sound bizarrely obvious, but that's exactly what hackers look for and find.
In a lot of cases, it's not the core operating system itself but software provided by app vendors or hardware vendors. In many cases, that software needs special OS access privileges - but the code is completely trusting - in fact, in a lot of cases it will only work under near-perfect conditions. And it didn't help that Microsoft Windows started out with convoluted and poorly documented hardware driver rules. MSDOS was simple enough. But from there through XP, the best source of documentation was often hunting through the Microsoft source code for examples that did something similar to what you needed. That source was (and likely still is) available in special SDK packages.
But things are getting better. Static analysis tools can track through the code and flag any path where memory leaks, memory overwrites, and such are unchecked. So, if you want to write bullet-proof code, you can. All you need to do is take those static-analysis reports seriously and spend the time to determine exactly what they are reporting.
Of course, if you don't want to do static analysis - hackers can do it later.
elias001 said:
I still don't understand how hackers got the MS Windows source code. Also, there are thousands of files that make up Windows. How did they know which one to decompile and try to read? Sorry if I am asking an incorrect question, in the sense that what I am describing is probably not how it is done by hackers.
.Scott said:
Linux had less of a problem because it was less of a target and because it had better review.

Indeed; and you can be sure they will. But yeah, frequent checking and validating - to the point of paranoia (remember: you're not paranoid if they really are out to get you!) - is good practice.
sbrothy said:
I don't know the particular case if there is one. Disassembling is one possibility

Back in the late 80s or so, the college where I worked had a class that covered the basics of computing on PCs. One part of the class dealt with common uses of DOS (either MSDOS or PCDOS) for dealing with files and directories; e.g., copy files, delete files, etc. This was before Windows really started to take off. One skill that was taught was how to rename a directory.
Mark44 said:
Back in the late 80s or so, the college where I worked had a class that covered the basics of computing on PCs. One part of the class dealt with common uses of DOS (either MSDOS or PCDOS) for dealing with files and directories; e.g., copy files, delete files, etc. This was before Windows really started to take off. One skill that was taught was how to rename a directory.

In fact, breaking into a Windows box is as simple as moving the HDD to another computer and accessing it as a slave drive. No big deal (unless it's encrypted, of course.)
The procedure was as follows:
- Create a new directory with the desired name using MD (short for make directory).
- Copy the files from the old directory using COPY.
- Delete the files from the old directory using DEL (short for delete).
- Delete the old directory using RD (short for remove directory).

I had a copy of Norton Utilities, one of whose utilities had the capability of renaming directories. It seemed unrealistic to me that Norton would go through the procedure listed above, so I used a disassembler to look at a 32KB executable file that contained the directory-rename code. At the time, DOS consisted of a bunch of external commands, including the ones I listed above, as well as a lot of very low-level functionality that was available only to assembly language code. All of the low-level DOS commands used a specific assembly interrupt instruction, INT 21h, in combination with certain register values to indicate which DOS function to execute.
With the disassembler I identified about 30 different places with INT 21h instructions, and looked at the register values just prior to where the interrupts were executed. At one place I found that a low-level DOS function was used to rename a file, and that was where Norton Utilities was renaming a directory. It hadn't occurred to me before then, but as far as DOS was concerned, the only difference between a file and a directory was a single bit set or not in the file attribute byte.
After discovering how Norton did things I was able to write my own utility, part in assembly, and part in C, that prompted the user for the names of the directory to change and the new name for the directory.
That access went out the window (pun intended) with Windows 95 and the change to a 32-bit code base. Programmers no longer had access to the INT 21h functionality.
elias001 said:
I still don't understand how hackers got the MS Windows source code.
Heh. "Close to the metal."

harborsparrow said:
Old person here who wrote code in a gazillion languages including close to the metal. Elias001 is ahead of the game by learning C and assembler to understand precisely how memory is being used in a program. However, as programs historically grew more complex and larger, people found that C and C++ were unsafe to use because the coder can (deliberately if a hacker, inadvertently if tired or distracted) destructively overwrite memory in active use. It was also very difficult to adapt C and assembler programs for different OS's and instruction set architectures.
Newer languages tend to run on a virtual machine. Virtual machines have been tuned to a high level these days and have huge advantages over actual hardware. The VM also is a standard that remains stable over time just like an instruction set architecture. If you don't understand why VMs are important, I suggest you divert your questions about specific languages into that direction for a time and research it instead.
And yes, despite its popularity, Python has several disadvantages as a programming language. You generally won't find people talking about them because the authors of books and Wikipedia pages about Python are generally fans and not always looking with completely clear eyes. To name some drawbacks: it is not standardized, so new releases can break existing code; it is not suitable for high-performance applications; and coders are given so much freedom that it is not always precisely clear how a program is using memory, and programs can behave unpredictably as a result (duck typing).