Why Do CPUs Handle Signed and Unsigned Arithmetic Differently?

In summary: the original poster is looking for recommendations on books and online resources to help them learn C programming for an exam based on a set of course notes. Respondents have some issues with those notes, mostly with the early part that reviews the history of programming languages, and suggest "The C Programming Language" by Kernighan and Ritchie along with online tutorials as a starting point; the thread then moves on to integer sizes, two's complement, and signed versus unsigned arithmetic.
  • #1
sponsoredwalk
I'm completely lost, I have an exam in C programming in a month & a half based off of this:
http://www.maths.tcd.ie/~odunlain/1261/prog.pdf
Basically I need some recommendations of books &/or online resources that follow the flow of those notes extremely closely but offer additional insight, different perspectives, more of an explanation of wtf is going on. My search thus far has found nothing, I just haven't a clue.
Really appreciate any help :cool:
 
  • #2
Hm... I skimmed through your PDF, and I think you are going to have a hard time finding another set of notes that are as randomly (dis)organized as that.

If you want a good book on C, get "The C programming language" by Kernighan and Ritchie. It doesn't follow the same "structure" as your course notes, but it will teach you C.
 
  • #3
Ugh. Doesn't that thing have an index somewhere so I can see what you actually have to learn? I hope you don't mind that I'm not very eager to go through the whole document before being able to help you out.
 
  • #5
Wow what a horrible document!

I skimmed through it, and I do think you will learn what you need to know from a good C book:

https://www.amazon.com/dp/0131103628/?tag=pfamazon01-20

There are some good tutorials that start right at the beginning here too:

http://www.cprogramming.com/

I noticed a few things in your notes that might not be covered on those free websites, such as floating point representation. There's a nice little tutorial on that here:

http://steve.hollasch.net/cgindex/coding/ieeefloat.html

Post up some specific questions if you have them :)

Have you got a compiler to use (for whatever your platform is - Linux, Windows, etc)?
 
  • #6
"The C programming language" by Kernighan and Ritchie.
I can't find my copy of this right now, and it's been years since I read it, but I seem to think it was more of a language reference than a tutorial (or maybe that was the first edition?).

I have some issues with that prog.pdf file, but most of my issues are with the early part of the document that reviews the history of programming languages.

machine language - in addition to being used on the earliest computers, it was still in use in the 1970s: small machine language programs were "toggled" into memory through front-panel switches and used to read a larger program from some device on the machine, in order to boot it up.

assembly language - the example given was generated by a C compiler and not typical of the type of program a human would write. Assembly language is still used for small parts of operating systems that deal with processor specific instructions needed for multi-threading and interrupt handling. High level assembly language (ALC, HLASM) is still used on IBM mainframes for parts of business applications, although there is an effort to convert most of that assembly language to Cobol.

Fortran - mostly used because there is a large amount of existing code written in Fortran that would take a long time to convert to another language, and because of that, some supercomputer makers have put more effort into optimizing their Fortran compilers, including some processor-specific language extensions. A sort of self-perpetuating cycle.

Cobol - still used in a lot of business environments, such as banking.

However all of the non-C language stuff is just background stuff and probably won't get used in your class.

- - -

"smallest piece of data in C is a byte" - C supports bit fields in structures.

Most current PCs support 64 bit integers and 64 bit pointers if you use a 64 bit compatible operating system.

Intel processors also support 80 bit (10 byte) floating point numbers in hardware registers, but it's not common to see this data type supported in current C compilers.

"arrays and pointers ... starting address" - C treats the name of an array as the starting address of an array, but the starting address of an array is not stored in the array (it's stored in a symbol table during compilation, and ends up in the program code, but not in the data for an array).

"char * argv[] ... array of strings" - argv is an array of pointers to characters. Each pointer points to the first character of a string of characters.

"operator precedence - &" The binary operators, & ^ | are very low precedence, lower than logical (&&, ||), or comparator (==, !=, ...) operators. Some consider this a poor design decision for the C language as it generally requires parenthesis that wouldn't be needed otherwise.
 
  • #7
rcgldr said:
However all of the non-C language stuff is just background stuff and probably won't get used in your class.

... unless he's the sort of lecturer who tests that you have learned everything in the notes verbatim, including all the mistakes :devil:

Incidentally, the "fortran program" in the introduction is not written in any version of fortran that I've seen or used in the past several decades. It certainly doesn't comply with any of the standards like Fortran II, IV, 77, 90, 95, etc.
 
  • #8
I tried to go through the document one more time, but it truly is one of the most terrible things I've ever read. If I were you, and I wanted to keep as close to the flow (for lack of a better term) of his notes as possible, I would simply look things up on the internet for every heading.

For example, he first talks about 'programming languages'. I would then search for programming languages on the internet to get a general idea of what they're for, what programming languages are widely used, etc. (Although this might not be a good example, as you can most likely skip the first four or so chapters.)

I often used this method when I had to use an unintelligible book.
 
  • #9
Thanks a lot guys, as you can imagine most of us haven't a clue what's going on with C.
My main problem with C is that I haven't a clue what's going on, in other words it's like being given an integral table & being told that whenever you come across any derivative/integral/power series etc... in physics refer to this table, no need to learn wtf is going on from the fundamentals...
I think it's really a perfect analogy, I tried to learn physics only to be forced to learn mathematical methods for dealing with physical concepts, then learning this math forces you to learn calculus, which forces you to learn real analysis, which forces you to learn axiomatic set theory (it did for me anyway).
I hardly want to go that deep into computing but I mean I don't even conceptually know wtf is going on, what's a compiler? I would have thought having some program like a compiler is cheating in that programming is supposed to design those things? :uhh:
If not, why not? If C is something that uses pre-determined programs like compilers then wtf is C, & where do the other things come from? What are their limits? What are C's limits?
The main problem here is that I can't even adequately phrase the issues I have with C, all I know is that I need to understand a computer in stages from the moment the power button is turned on & I need to understand where C is in this hierarchy, with say Windows/Ubuntu as the top of the hierarchy & a computer with power & nothing else as the bottom of the hierarchy :redface:
Hopefully these aren't too stupid as questions but this is how badly I understand C compared to the rest of mathematics...
 
  • #10
Wow, wow, wow. Just stop there for a moment, my friend. You're definitely overanalyzing this. What you need to learn for this course is the C programming language; compilers and linkers are the tools you will be using to convert the code you write into something your computer can understand and use. Thus, using a compiler+linker is no more 'cheating' than using a calculator is cheating when doing a physics test: the point is not to find out how calculators work, but to do physics. Likewise, the point of this course is to learn C, not how the internals of a computer work.
 
  • #11
rcgldr said:
I can't find my copy of this right now, and it's been years since I read it, but I seem to think it was more of a language reference than a tutorial (or maybe that was the first edition?).
That's my sense, as well (on lang ref vs. tutorial). I have the 2nd and 3rd editions, and they have some exercises, but K & R is a terse, bare-bones presentation of C. There are many other books out there that go into much more detail, although most cover C++ these days.

Another good reference that isn't as well known as K & R is "C: A Reference Manual," by Harbison and Steele.
 
  • #12
sponsoredwalk said:
Thanks a lot guys, as you can imagine most of us haven't a clue what's going on with C.
Most people can use a calculator without understanding its inner workings. C and a computer are similar, except that you have a programmable calculator without an interactive mode. For some, it might be easier to start off with classic Basic or (modern) Python, since these have an interactive mode, but spend only a day or so on that to get a sense of programming, as opposed to learning yet another language.

You'd want to start off with the simplest of programs, which is what most tutorial books or online websites do. Using a source level debugger will help quite a bit, as it will show what is happening step by step. If you're curious about how the computer works, source level debuggers usually include an assembly window option where you can follow the machine language step by step.

The class document mentions gcc (a particular compiler), so are you learning this on a Unix type system as opposed to Windows?
 
  • #13
Thanks for the replies - I think we've already made progress:
If C is just a language then I guess you could distill my problems into the more general question of where a language fits into the overall scheme of a computer if thought of in stages from a computer with power & a circuit board to a computer with windows/ubuntu.

If compilers & linkers are a way to convert code into something a computer can understand, what is a compiler interacting with to make things work?

As for the gcc question, I feel it's completely useless to begin writing programs until I first know what's going on but once I get there I can use either windows (my laptop) or unix (college). I've written them in college but I mean it was all complete nonsense to me.

Just as with a calculator there is a certain amount of mystery when you don't understand a Taylor series or approximation techniques &, with understanding, you are able to go from fearful reliance on a calculator to viewing it as merely a way of speeding things up, so too do I just need to sort out all the nonsense in my head about what's going on before I go off learning C; it's the background stuff I need to establish first. It really feels like trying to learn Ancient Greek without knowing what language is.
 
  • #14
sponsoredwalk said:
If compilers & linkers are a way to convert code into something a computer can understand, what is a compiler interacting with to make things work?
The compiler translates your code into object code - machine code that the computer can process. Another step in the process of producing an executable program is linking, in which calls to library functions (e.g., printf, scanf, malloc, pow, and so on) get matched (or linked) to the actual code in the libraries, and the library code is written into the executable. This is somewhat dated, as many programs these days don't actually include the library code, but instead of using static libraries, use dynamic link libraries (or DLLs). This is certainly the way it works in Windows programming, and there might be a counterpart in Unix/Linux programming.

When the code runs, it interacts with the computer's operating system.
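As a rough sketch of that pipeline with gcc (file names and exact flags are just examples and will vary with your setup), the compile and link steps can be run separately:

Code:
/* hello.c -- the typical gcc commands are shown as comments:
 *
 *   gcc -c hello.c          compile: produces the object file hello.o
 *   gcc hello.o -o hello    link: resolves printf against the C library
 *   ./hello                 run the resulting executable
 */
#include <stdio.h>

int main(void)
{
    printf("hello, world\n");   /* printf's code comes from the library at link time */
    return 0;
}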
sponsoredwalk said:
Just as with a calculator there is a certain amount of mystery when you don't understand a taylor series or approximation techniques &, with understanding, you are able to go from fearful reliance on a calculator to viewing it as merely a speeding up process, so too do I just need to sort all the nonsense in my head about what's going on before I go off learning C, it's the background stuff I need to establish first. It really feels like trying to learn Ancient Greek without knowing what language is.
I would advise jumping in sooner rather than later. You don't really need a deep understanding of how the computer works to be able to write code, just a fair understanding of some basic ideas of input and output, flow control using if ... else if ... branching and for/while loops.
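For instance, a small sketch that uses only those pieces (reading a number, branching, looping):

Code:
#include <stdio.h>

int main(void)
{
    int n;

    printf("Enter a small positive integer: ");
    if (scanf("%d", &n) != 1 || n < 1) {     /* input plus a simple check */
        printf("That wasn't a positive integer.\n");
        return 1;
    }

    for (int i = 1; i <= n; i++) {           /* loop n times */
        if (i % 2 == 0)
            printf("%d is even\n", i);
        else
            printf("%d is odd\n", i);
    }
    return 0;
}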
 
  • #15
sponsoredwalk said:
It really feels like trying to learn Ancient Greek without knowing what language is.

Well, you learned your native language without knowing what "language" was (and even before you could read, write, or speak!)

The main reason for programming computers in any high level language is so that you don't need to know how they work. (But possibly the person who wrote the first sections of your course notes doesn't realize this.) All you need is some basic ideas like
* there is a memory, and you can access data in memory by inventing names for parts of it and defining what sort of data each name refers to. That's what statements like "int i;" or "double xyz[3];" do. The programming language knows about some basic data types like integers, real numbers, and character strings. You can also define your own data types to represent more complicated "things" (but you don't need to know about that to get started).
* You can do calculations on the data by writing expressions that look similar to math notation, for example "x = (-b + sqrt(b*b - 4*a*c))/(2*a);" (see the small sketch after this list).
* You can control "what happens next" by constructs like "if" statements, and make sections of the program repeat (loop) with "do while(...)", "for ...", etc.
* To store information permanently there are "files" which you can "open", "close", "read" and "write". In C (and many other languages), the computer keyboard and display screen are just special types of file (though trying to "write" to the keyboard or "read" from the screen isn't likely to do anything useful).

That's probably enough knowledge about "how computers work" to get started on programming. The best way to learn it is by doing it, not reading about it.
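As a minimal sketch touching all four ideas (the numbers and the file name are just illustrative):

Code:
#include <stdio.h>
#include <math.h>

int main(void)
{
    /* memory: named data with declared types */
    double a = 1.0, b = -3.0, c = 2.0;

    /* calculation: an expression that mirrors the math notation */
    double x = (-b + sqrt(b*b - 4*a*c)) / (2*a);   /* one root of a*x^2 + b*x + c = 0 */

    /* control: branch on the result */
    if (x > 0)
        printf("positive root: %f\n", x);

    /* files: open, write, close */
    FILE *out = fopen("root.txt", "w");
    if (out != NULL) {
        fprintf(out, "root = %f\n", x);
        fclose(out);
    }
    return 0;
}

(On Linux you may need -lm when compiling, to pull in the math library for sqrt.)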
 
  • #16
Mark44 said:
Another step in the process of producing an executable program is linking, in which calls to library functions (e.g., printf, scanf, malloc, pow, and so on) get matched (or linked) to the actual code in the libraries, and the library code is written into the executable. This is somewhat dated, as many programs these days don't actually include the library code, but instead of using static libraries, use dynamic link libraries (or DLLs). This is certainly the way it works in Windows programming, and there might be a counterpart in Unix/Linux programming.

This is not entirely true. The object file created by a compiler doesn't actually contain machine code. Rather, it contains symbols (defined, undefined and local - but let's not get too deep into detail here). The linker combines all this into a single unified executable file, and resolves all the symbols. DLLs allow dynamic linking, which means there are still undefined symbols in the executable that get resolved when the program runs, instead of in the linking process itself. It's true that dynamic linking is used more and more often, but there's still linking involved in the actual creation of said executable. (If that's what you meant, then I'm sorry. Although maybe this clarifies things a bit for others. :smile:)

And yes, there's something similar to DLLs in Linux (although how it's used is defined less sharply, and I suspect non-ELF executables might use a different format). These are called SO files, which means, if I remember correctly, 'shared object'.
 
  • #17
Hobin said:
This is not entirely true. The object file created by a compiler doesn't actually contain machine code. Rather, it contains symbols (defined, undefined and local - but let's not get too deep into detail here).
I disagree. The object file has to contain at least some machine code from statements in the source code such as assignment statements, loops, etc. You're right about the object code also containing placeholders for symbols that aren't defined in the source code (such as library functions and the like), but I too didn't want to get too deep into the explanation.
Hobin said:
The linker combines all this into a single unified executable file, and resolves all the symbols. DLLs allow dynamic linking, which means there are still undefined symbols in the executable that get resolved when the program runs, instead of in the linking process itself. It's true that dynamic linking is used more and more often, but there's still linking involved in the actual creation of said executable. (If that's what you meant, then I'm sorry. Although maybe this clarifies things a bit for others. :smile:)

And yes, there's something similar to DLLs in Linux (although how it's used is defined less sharply, and I suspect non-ELF executables might use a different format). These are called SO files, which means, if I remember correctly, 'shared object'.
 
  • #18
Mark44 said:
I disagree. The object file has to contain at least some machine code from statements in the source code such as assignment statements, loops, etc. You're right about the object code also containing placeholders for symbols that aren't defined in the source code (such as library functions and the like), but I too didn't want to get too deep into the explanation.

I stand corrected. I thought that because the code still has to go through a linker, it would not contain any machine code yet (only optimized C code or assembly, for example). Just looked it up, and I was wrong about that. :blushing:
 
  • #19
Hi everyone, so I'm giving C a shot again & I need to know everything about the basic arithmetic of the subject but have been having a lot of trouble getting to grips with it if you could spare a few minutes.

Here are a bunch of questions I need to be able to answer before actually getting to hello world:
Convert -1054 and 1307 to 2s complement short integers (in hexadecimal)
Calculate the single-precision floating point form of -136.0/9, little endian.
Convert 23456 to short integer
(That is, convert to hexadecimal and pad to 4 hex digits if necessary. Give `big endian' answers; little endian is confusing.
Convert -4567 to short integer).
Again converting to short int, calculate 23456 + 23456 as short integers.
Convert the answer to decimal (the answer will be negative).
Convert 5675 to hexadecimal.

Translate the following ASCII characters (given in decimal) into English
065 032 113 117 105 099 107 032 098 114 111 119 110 010 000
(The `000' shows the end of the character string. Hexadecimal 41 is 'A', hexadecimal 61 is 'a', hexadecimal 0a is newline (line feed) '\n', and hexadecimal 20 is a blank.)
Calculate the single-precision floating-point representation of -320/17. Give the answer little endian, in hex. Do the same with +5/136.
Calculate the double precision representation of 5/136.
Converting decimal to short
Floating point numbers
short integer range
2s-complement form

The reason I can't answer them yet is because I haven't fully understood everything.

As I understand it, everything falls out of the following information I've gathered:

Four Fundamental Data Types:
int -
char
float
double

Two Specifiers of Data Types:
Short/Long
Signed/Unsigned

int:
Bit Size: 8
Byte Size: 1
Unsigned Range: 0 - 255
Signed Range: -128 to 127

char:
Bit Size: 16
Byte Size: 2
Unsigned Range: 0 - 65535
Signed Range: [-32768, 32767]

This is the first bit of structure I've been able to pin down so hopefully I can develop the subject along these lines, however from my notes:
Next is short (short integer). In our system this appears to be two bytes with 65536 different values.
Next is int (integer). In our system this appears to be the same as long.
Next is long. In our system this appears to be 4 bytes. Therefore the range is from -2147483648 to 2147483647. About 2 billion.

He seems to be mixing everything together as a big glop of facts, & using different names too... Is there a way to make sense of everything I've written along with this quote & better yet am I on the right lines & can I make sense of the questions in the opening quote by continuing down these lines? [links appreciated]

Even better, only if you have the patience, is to think along the structured lines I'm trying to develop & to see how my approach can be used to deal with the material in pages 7 to 17 of http://www.maths.tcd.ie/~odunlain/1261/prog.pdf .

Thanks :cool:
 
  • #20
int:
Bit Size: 8
Byte Size: 1
Unsigned Range: 0 - 255
Signed Range: -128 to 127

char:
Bit Size: 16
Byte Size: 2
Unsigned Range: 0 - 65535
Signed Range: [-32768, 32767]

This is reversed - int is and always was larger than char. But even after switching char and int in the quote, it still isn't entirely correct.

Trick is, the size of an int is architecture and compiler dependent. When processors were 16 bits (times of the 8086) int was usually 16 bits and long int was 32 bits. When processors became 32 bits, int became 32 bits and long int 64 bits (unless you were still working in some kind of 16 bit DOS box or something like that, or unless long int was assumed to be 32 bits as well by whoever designed the compiler). In today's architectures int can be 64 bits, so long int would be 128 bits - or not.

I have a feeling I remember reading about machines with even other sizes of ints (like 12 bits), which doesn't make things easier. My suggestion is to follow the information given by your prof, as he most likely refers to the system you will be working on. Note he states "in our system it appears to be" - which is a subtle reference to the mess I described.
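If you want to see what your particular system and compiler actually use, you can simply ask (the output will differ from machine to machine, which is exactly the mess described above):

Code:
#include <stdio.h>

int main(void)
{
    /* Sizes in bytes; they depend on the architecture, OS and compiler. */
    printf("char      : %zu\n", sizeof(char));      /* 1 by definition */
    printf("short     : %zu\n", sizeof(short));
    printf("int       : %zu\n", sizeof(int));
    printf("long      : %zu\n", sizeof(long));
    printf("long long : %zu\n", sizeof(long long));
    printf("pointer   : %zu\n", sizeof(void *));
    return 0;
}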
 
  • #21
Borek said:
I have a feeling I remember reading about machines with even other sizes of ints (like 12 bits), which doesn't make things easier.
Yes. Nowadays the term "byte" is fairly standard at 8 bits, but back some years ago, different architectures used bytes of different sizes. If memory serves, the Digital PDP-8 had 6-bit bytes and 12-bit words (which could store an int).
Borek said:
My suggestion is to follow the information given by your prof, as he most likely refers to the system you will be working on. Note he states "in our system it appears to be" - which is a subtle reference to the mess I described.

sponsoredwalk,
Many of the things you're asking about, such as 2's complement, can be found in wikipedia or by an internet search, if not in your course materials. Rather than posting a fairly long list of things that you are uncertain about, it's better to focus on one or two items at a time.
 
  • #22
Borek said:
I have a feeling I remember reading about machines with even other sizes of ints (like 12 bits), which doesn't make things easier.

There were also "word-addressable" machines, for example the early Cray supercomputers (and other high performance number-crunching machines from the 1970s and 80s) where memory addresses referred to floating point words, not bytes. On the first Cray C compiler, sizeof(char), sizeof(long), and sizeof everything else all returned the same value 1, or one 64-bit word.

There is a more general point here, which is that you DON'T "need to be able to answer" these questions "before actually getting to hello world". All you need to know about a char to get started is that it is big enough to hold a character. The time to bother about the details is after you have actually done some programming IMO. If you write a program to print a table of factorials of integers and it starts printing nonsense when the factorials get bigger than about 1,000,000,000, then you have a reason to want to know why, and what to do about it - which is a better learning situation than trying to memorize a collection of facts without any context, before you think you can start.
 
  • #23
AlephZero said:
There is a more general point here, which is that you DON'T "need to be able to answer" these questions "before actually getting to hello world". All you need to know about a char to get started is that it is big enough to hold a character. The time to bother about the details is after you have actually done some programming IMO. If you write a program to print a table of factorials of integers and it starts printing nonsense when the factorials get bigger than about 1,000,000,000, then you have a reason to want to know why, and what to do about it - which is a better learning situation than trying to memorize a collection of facts without any context, before you think you can start.

I agree completely. sponsoredwalk, from your posts in this thread, it seems that you believe that you must completely understand all facets of the hardware before you can attempt to write a simple program. A more productive path is to start poking around some simple programs until you understand why they produce the output they produce. After that, try modifying the code. Then see if you can write some of your own code to solve a simple problem.
 
  • #24
I don't mean to sound like a lunatic but you're talking to someone who had to go to the depths of topology & axiomatic set theory in order to be happy with introductory real analysis properly (and just won't feel right until I've gotten through all of Bourbaki), I'm just not able to deal with memorizing a bunch of seemingly arbitrary rules without good reason as a matter of practicality - it's literally a matter of practicality that I can't...

In general I'd love to be able to poke around with this stuff & get used to it like you guys did but that procedure completely failed while simultaneously being taught by a lecturer for the reasons I've mentioned.

As for the arithmetic stuff on wikipedia, I know it's scattered about on wikipedia but when you are trying to piece it all together & haven't a clue what you're looking for & are unable to get answers to all the extra questions that pop up then it's pretty much useless (as it has been). I may not be learning C the right way by focusing on all of the arithmetic side of things but if you look in the notes that's what he expects (there'll be a full question on this material & rather than memorize solutions I posted here to try to understand this apparent nonsense, questions like why you are doing signed arithmetic are things I haven't found on wikipedia or from anyone in my course yet - hence posting here...) so I'm forced to do it. I'd rather try to appreciate the logic behind it but I mean as I've said I'm hitting brick walls continually, haven't a clue what the overall idea is & am just trying to get some guidance direct to the problem.

Some immediate questions are why you need to do signed arithmetic? Where is the logic behind signed arithmetic? You mean there are more than just a 2s complement? 16s complement? 15s? 1? Do I really need to memorize the entire ASCII alphabet? Is there not some logic behind predicting entries from it? What is the best list of sizes of data types that explains the arbitrary choices he's made for defining sizes, & how can I make the arithmetic fit? I haven't been able to find answers to these on wikipedia thus far...

To sum it all up, is there actually a resource that will provide answers to all of the questions I've posted, both on the course and conceptually, or something extremely near it, or should I resort to memorization until the exam is over & banish C programming from my life forever?
 
  • #25
Just noticed this thread. Tell you what, ever used the Microsoft C++ debugger? I'm just sayin' the debugger is a good way to learn lots of things in the language. I took C++ in school. The instructor gave a really tough assignment we had to code on the University system. I moved it onto my PC under Microsoft Visual C++ and after running it through the debugger over many hours I was one of a few in class to do the assignment correctly.

Just another option to think about if you want to excel in C++, in my opinion.
 
  • #26
sponsoredwalk said:
I don't mean to sound like a lunatic but you're talking to someone who had to go to the depths of topology & axiomatic set theory in order to be happy with introductory real analysis properly (and just won't feel right until I've gotten through all of Bourbaki), I'm just not able to deal with memorizing a bunch of seemingly arbitrary rules without good reason as a matter of practicality - it's literally a matter of practicality that I can't...

In general I'd love to be able to poke around with this stuff & get used to it like you guys did but that procedure completely failed while simultaneously being taught by a lecturer for the reasons I've mentioned.
Real analysis and C programming are two very different disciplines, IMO, so a strategy that worked for you in analysis might not be the best course of action for programming.

Your first post in this thread was almost two months ago. If you have been in this class for that long and still have yet to write your first program, that's worrisome to me. It seems reasonable that you want to understand the basics before getting started, but taking so long to "get your hands dirty" suggests a fear of doing so.
sponsoredwalk said:
As for the arithmetic stuff on wikipedia, I know it's scattered about on wikipedia but when you are trying to piece it all together & haven't a clue what you're looking for & are unable to get answers to all the extra questions that pop up then it's pretty much useless (as it has been). I may not be learning C the right way by focusing on all of the arithmetic side of things but if you look in the notes that's what he expects (there'll be a full question on this material & rather than memorize solutions I posted here to try to understand this apparent nonsense, questions like why you are doing signed arithmetic are things I haven't found on wikipedia or from anyone in my course yet - hence posting here...) so I'm forced to do it. I'd rather try to appreciate the logic behind it but I mean as I've said I'm hitting brick walls continually, haven't a clue what the overall idea is & am just trying to get some guidance direct to the problem.
The best way to get a handle on this, IMO, is to take jackmell's advice and get familiar with a debugger. This will give you a view into the CPU so you can see the registers and memory, and see what happens for each line of code.

C and C++ have a rich set of types to use. For example, in the integral (no fractional part) types there are char, short, int, and long. Each of these, including char, has both signed and unsigned variants. The "signed" vs. "unsigned" parts determine the range of numbers that can be stored in a variable of that type.

Assuming that a char is 8 bits, an unsigned char can hold numbers in the range [0..255]. A signed char can hold numbers in the range [-128..127].

Since about the mid-90s, the int type has been 32 bits (it was 16 bits before that, corresponding to the size of registers on most personal computers). An unsigned int can hold 2^32 different values, from 0 through 2^32 - 1 (= 4,294,967,295). A signed int can hold numbers in the range -2,147,483,648..2,147,483,647.

An advantage of unsigned numbers is that you can work with numbers twice as large for the same number of bits. A disadvantage of unsigned numbers is that you don't have negative numbers if you need them.
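Rather than memorising these ranges, you can print the ones your compiler actually uses; the macros below come from the standard header <limits.h>:

Code:
#include <stdio.h>
#include <limits.h>

int main(void)
{
    printf("signed char  : %d .. %d\n", SCHAR_MIN, SCHAR_MAX);
    printf("unsigned char: 0 .. %u\n", (unsigned)UCHAR_MAX);
    printf("short        : %d .. %d\n", SHRT_MIN, SHRT_MAX);
    printf("int          : %d .. %d\n", INT_MIN, INT_MAX);
    printf("unsigned int : 0 .. %u\n", UINT_MAX);
    return 0;
}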

sponsoredwalk said:
Some immediate questions are why you need to do signed arithmetic?
If you have signed numbers, you can eliminate the need for the subtraction operation in the processor, which can reduce the amount of microcode that is needed in the CPU. I believe this is the reason. For example, instead of calculating a - b, you can do this addition: a + (-b).
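A small sketch of that identity at the bit level, using unsigned 16-bit values so the arithmetic is well defined (the particular numbers are just examples):

Code:
#include <stdio.h>
#include <stdint.h>

int main(void)
{
    uint16_t a = 23456, b = 1307;

    /* Two's complement: negating b is the same as flipping its bits and adding 1. */
    uint16_t neg_b = (uint16_t)(~b + 1);

    printf("a - b        = 0x%04X\n", (unsigned)(uint16_t)(a - b));
    printf("a + (~b + 1) = 0x%04X\n", (unsigned)(uint16_t)(a + neg_b));  /* same bit pattern */
    return 0;
}

Both lines print 0x5685, so the same adder hardware can serve for subtraction.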
sponsoredwalk said:
Where is the logic behind signed arithmetic? You mean there are more than just a 2s complement? 16s complement? 15s? 1?
I know of only two: 1's complement and 2's complement.
sponsoredwalk said:
Do I really need to memorize the entire ASCII alphabet?
No, but I have found that it's helpful to have a few ASCII codes memorized. 'A' = 65, 'B' = 66, and so on. 'a' = 97, 'b' = 98, and so on. Each lower case letter's ASCII code is 32 higher than its upper case counterpart. '0' (zero) = 48, '1' = 49, and so on.
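A short sketch of how far that structure gets you (a character constant in C evaluates to its character code, which is ASCII on virtually all current systems):

Code:
#include <stdio.h>

int main(void)
{
    printf("'A' = %d, 'a' = %d, difference = %d\n", 'A', 'a', 'a' - 'A');  /* 65, 97, 32 */
    printf("'0' = %d, so '7' - '0' = %d\n", '0', '7' - '0');               /* 48, 7      */

    char c = 'q';
    printf("uppercase of '%c' is '%c'\n", c, c - 32);   /* 'q' -> 'Q' on ASCII systems */
    return 0;
}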
sponsoredwalk said:
Is there not some logic behind predicting entries from it? What is the best list of sizes of data types that explains the arbitrary choices he's made for defining sizes, & how can I make the arithmetic fit? I haven't been able to find answers to these on wikipedia thus far...
The sizes of data are pretty much controlled by the register sizes, and will almost always be a power of 2: 8 bits, 16 bits, 32 bits, 64 bits.


sponsoredwalk said:
To sum it all up, is there actually a resource that will provide answers to all of the questions I've posted, both on the course and conceptually, or something extremely near it, or should I resort to memorization until the exam is over & banish C programming from my life forever?
 
  • #27
sponsoredwalk said:
Some immediate questions are why you need to do signed arithmetic? Where is the logic behind signed arithmetic?
For a 2's complement computer (almost all of them are 2's complement these days), when doing adds or subtracts with integer numbers, the cpu doesn't know or care if the numbers are signed or unsigned, and always performs the same operation. The cpu always sets or clears an overflow status bit assuming the numbers are signed, and always sets or clears a carry status bit assuming the numbers are unsigned.

Most cpus have separate instructions for signed and unsigned multiply, and for signed and unsigned divide.

Almost all floating point operations assume signed numbers. Most floating point formats use a sign bit, and do not use a 2's-complement-like format.
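A small sketch of that point, assuming the usual two's-complement machine: the addition produces the same 16-bit pattern either way, and only the interpretation of the result differs:

Code:
#include <stdio.h>
#include <stdint.h>

int main(void)
{
    uint16_t x = 23456, y = 23456;
    uint16_t sum = (uint16_t)(x + y);       /* the adder doesn't care about signedness */

    printf("bit pattern : 0x%04X\n", (unsigned)sum);
    printf("as unsigned : %u\n", (unsigned)sum);       /* 46912 */
    printf("as signed   : %d\n", (int)(int16_t)sum);   /* -18624 on a two's-complement machine,
                                                          cf. the 23456 + 23456 exercise in post #19 */
    return 0;
}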

sponsoredwalk said:
You mean there are more than just a 2s complement?
There's 1's complement math, but it's rarely used these days.

http://en.wikipedia.org/wiki/Ones'_complement

sponsoredwalk said:
What is the best list of sizes of data types that explains the arbitrary choices he's made for defining sizes?
Depends on the cpu, the system, and the default sizes for the compiler. The current standard for Microsoft Visual C/C++ is that "int" means 32 bits, even on a 64 bit cpu. Other environments may have different standards.
 

What is "C Programming Nightmare"?

"C Programming Nightmare" is a popular coding challenge or puzzle that involves creating a complex program in the C programming language. It is often used as a test of a programmer's skills and problem-solving abilities.

Why is "C Programming Nightmare" considered difficult?

The challenge involves creating a program that can perform a specific task or solve a problem using the C programming language, which is known for its low-level and complex syntax. It requires a deep understanding of the language and the ability to think critically and logically.

Who can participate in "C Programming Nightmare"?

Anyone with a strong background in C programming can participate in "C Programming Nightmare". It is often used as a test for students, job applicants, and experienced programmers to showcase their skills and knowledge.

What are the benefits of participating in "C Programming Nightmare"?

Participating in "C Programming Nightmare" can help improve a programmer's problem-solving abilities, as well as deepen their understanding of the C programming language. It can also serve as a valuable learning opportunity and a way to challenge oneself.

Where can I find "C Programming Nightmare" challenges?

"C Programming Nightmare" challenges can be found online, on coding challenge websites, or through programming communities and forums. They may also be offered as part of coding competitions or job interviews.
