# Memory location of C

1. Jun 14, 2014

### kidsasd987

If we declare variables with different type specifiers, is the memory location (address) assigned according to the type specifier, or is it just arbitrary?

ex.

1. int variables are assigned locations close to each other (or in a specific region of memory), while floats and doubles get different regions.

2. If 1 is wrong, the locations are assigned arbitrarily.

Thank you.

2. Jun 14, 2014

### nsaspook

Where separate variables are stored in a language like C is implementation dependent. There are usually directives that can place variables at a memory address of your choice, but even then the address in a system with virtual memory can be completely different from its physical memory address on the hardware.

3. Jun 14, 2014

### phinds

That would make no sense since memory allocation is a run-time activity and variables come and go based not on what kind they are but on the calling order of the subroutines in which they are created. When the run-time code requests a memory block from the OS, it is just that ... a block of memory. The OS does not care what it is going to be used for.

All the variables for a subroutine, regardless of type, are created when that subroutine is called (unless they are declared static), and then if/when that sub calls another sub, the 2nd sub's variables are allocated memory. When a sub is exited, it tells the OS to de-allocate the memory it was given (although actual de-allocation may be delayed depending on the kind of garbage-collection system being used).

4. Jun 14, 2014

### AlephZero

What post #2 and #3 said.

But on most types of computer, the hardware works faster if variables are "aligned" to start at preferred positions in memory. For example, a variable that holds a 32-bit integer takes up 4 bytes of memory and will be allocated so that its starting address is a multiple of 4. 64-bit floating-point variables might also start on multiples of 4 bytes, though on some systems it is faster if they start on multiples of 8 bytes.

So, if your program allocates variables of different sizes, there may be small unused "holes" in memory between them.

That is not quite the same as your example 1, but it might be what gave you the idea.

5. Jun 14, 2014

### Staff: Mentor

A common arrangement is a "run-time stack" or "call stack." Suppose the following sequence of calls takes place:

Function A calls function B.
Function B calls function C.
Function C returns (to B).
Function B calls function D.
Function D returns (to B).
Function B returns (to A).
Function A calls function E.

Call the block of memory associated with each function (its variables, parameters, and other housekeeping information) A, B, C, etc. Then the stack grows and shrinks as follows, as the functions are called and return:

Code (Text):

C     D
B  B  B  B  B     E
A  A  A  A  A  A  A  A

http://en.wikipedia.org/wiki/Call_stack

Within each block, the variables and parameters might be arranged in the sequence in which the function declares them, possibly with "padding" as needed so each one starts on a 4-byte boundary (or whatever is most efficient for the processor).

6. Jun 14, 2014

### FactChecker

In modern computers the memory address of a static variable is just a convenient way to reference it. The data at that address is shuttled between several levels of memory, so that a small amount of very fast memory holds the data currently in use while a large amount of slower memory keeps data that has not been accessed for a while. There can be several levels of cache memory (each with a different speed) in front of main memory. Memory is divided into blocks that are moved together. Because memory moves in blocks and speed is the top priority, data that is used close together in the code should be placed close together in memory; that way it ends up in the fastest memory together. That matters more than the type of the variable.

7. Jun 15, 2014

### D H

Staff Emeritus
Neither is correct.

The C memory model (2011 version of the standard) recognizes four storage durations for data: static, thread, automatic, and allocated. I'm going to ignore thread local data. For one thing, where that goes is highly implementation dependent. For another, your question is very basic. Threading is anything but a basic concept.

Static data are the variables you declare at file scope, the static variables you declare at function scope, and string constants such as the "Hello, world!" in const char * hw = "Hello, world!";. Automatic data include the arguments to and return value from functions, and non-static variables declared at block scope inside functions. Finally, allocated data are chunks of memory created by malloc and released by free.

Where those three types of data "live" is implementation dependent, but the following practice is widespread: Static data are built into your executable at compile/link time; they are part of your program image. Automatic data live on the "call stack". Finally, allocated data come from the memory heap.

Note that none of these organize your data by data type.

8. Jun 15, 2014

### phinds

Yeah, that's how I understand it as well.

9. Jun 15, 2014

### phinds

By the way, kidsasd987, this is an EXCELLENT kind of question to be asking. I worry sometimes that many young programmers today have no idea how computers work. They don't really program the computers at all, they program a language that insulates them from the computer.

That works just fine as long as nothing goes wrong, but debugging can be pretty much impossible if you don't really know what's going on. You are working on learning what's really going on, so keep it up!

Here's one for you, in pseudo code:

Code (Text):

declare A, B, and C as 32-bit floating point variables
declare D as a boolean variable
set A = 1.4
set B = 2.6
set C = A + B
set D = true if C = 4.0 and false if C is not = 4.0

Why in most computers does D turn out to be false? (Actually, it should always be false but I've been told that there are implementations where it comes out true although I've never experienced that myself)

Last edited: Jun 15, 2014
10. Jun 15, 2014

### FactChecker

It used to be a real problem that D was false. I am surprised that you see that happen these days; I have not seen the problem you describe for a very long time. The introduction of hidden bits and the later adoption of the IEEE 754 floating-point arithmetic standard have largely eliminated any worries about that. It is still in the programming standards not to compare floats for equality, but that is because not all processors have completely adopted IEEE 754 (although it is 30 years old).

11. Jun 15, 2014

### AlephZero

If you don't understand the basic fact that almost all floating point values are approximate, you shouldn't be writing software that uses floating point for anything serious IMO.

Relying on "hidden bits" to keep you safe is about as useful as relying on magic, or prayer.

See http://docs.oracle.com/cd/E19957-01/806-3568/ncg_goldberg.html (which was written by somebody who doesn't believe in magic, but does understand IEEE-754).

As for relying on standards as an alternative to believing in magic:
(from the above reference).

Last edited: Jun 15, 2014
12. Jun 15, 2014

### FactChecker

All modern military airplanes use floating point calculations in their flight controls.
After decades of telling new hires that they should never compare floating point variables for equality, I began to realize that the comparisons that used to fail no longer do. Then I tried examples that used to be a problem due to the binary representation of their fractional parts. I could not find any failures. That is all I can say. If you have an example that still fails, I would like to see it. @phinds's example above, run on my PC, gives the correct answer.

EDIT: Example was posted by @phinds

Last edited: Jun 16, 2014
13. Jun 16, 2014

### D H

Staff Emeritus
Other than the implied leading one in normalized numbers, IEEE 754 has no "hidden bits". I suspect the "hidden bits" about which you are writing are the extra 16 bits in Intel's 80 bit floating point registers. Those 80 bit floating point numbers are not part of the floating point standard. Those "hidden bits" are lost as soon as you write those registers to memory.

Yep, they do. And any code that assumes that 1.6+2.4==4 is roundly rejected during peer review.

Code (Text):
// C or C++
#include <stdio.h>

int main () {
    double x =  25.4 / 10.0  *  1.0 / 2.54 ;
    double y = (25.4 / 10.0) * (1.0 / 2.54);
    printf ("%g %s %g\n", x, ((x == y) ? "=" : "!="), y);
}
Code (Text):
# python
x =  25.4 / 10.0  *  1.0 / 2.54
y = (25.4 / 10.0) * (1.0 / 2.54)
if (x == y):
    comp = '='
else:
    comp = '!='
print str(x) + ' ' + str(comp) + ' ' + str(y)
Code (Text):
# perl
$x =  25.4 / 10.0  *  1.0 / 2.54 ;
$y = (25.4 / 10.0) * (1.0 / 2.54);
printf ("%g %s %g\n", $x, (($x == $y) ? "=" : "!="), $y);

Last edited: Jun 16, 2014
14. Jun 16, 2014

### FactChecker

Well, I stand corrected. Thanks! That is a very good example.

I don't want to hijack this thread, but is this because of the inaccuracy of storing the fractional part of intermediate calculations in binary numbers?

It's good to know that the standards are correct. As far as my misconception, I'll swallow my pride.

15. Jun 16, 2014

### phinds

The problem is always one of rounding error. If you had a decimal computer you would have the same problem, just with a somewhat different set of numbers and calculations, so the issue isn't binary or any other number system; it's that computers have a finite number of "decimal places" (or "binary places", or whatever) with which to represent numbers that cannot be PRECISELY represented that way. No decimal computer can store the exact number 1/3, for example.

16. Jun 16, 2014

### .Scott

The issue of comparing floating point numbers has been well covered in the posts before.
I have no doubt that all military airplanes use floating point calculations for navigation and in some cases for actual control. However, I would be surprised if there were many instances where a comparison for equality existed - even an approximate one. For example, checking a denominator for zero before dividing ignores the fact that if the denominator is merely close to zero, you will be saved from an arithmetic exception only to die from lack of precision. So what you need to do is determine the "other" method of making the calculation and decide what region around zero is best handled by that alternative.
It would be the same for every other comparison I can think of. After all, this isn't a Math exercise, it's an Engineering exercise - where limits are maximums, minimums, or tolerances. And, if you are doing a Math exercise involving specific values of real numbers, you probably shouldn't be using floating point.

There is one example that will work, but which I would be very uncomfortable with. And if it occurred in a transportation system - such as a military aircraft - I would be alarmed. That would be this:
Code (Text):

#define VALUE_IS_LATE -20.0
#define VALUE_IS_MISSING -21.0

double fFuelLevel = VALUE_IS_MISSING;

...

if(fFuelLevel == VALUE_IS_MISSING) { ... }

Assuming fuel level can never be negative, it will work - but will it always work? Even if the development environment changes?
A better way to do this would be to create a class that included the floating point number and a status flag as member variables.

Last edited: Jun 16, 2014
17. Jun 16, 2014

### Staff: Mentor

https://software.intel.com/en-us/articles/intel-decimal-floating-point-math-library
This provides decimal (not binary) support for IEEE 754-2008 and is meant to meet the legal requirements for decimal floating point operations.

And C# supports a 128-bit decimal floating-point datatype called Decimal.
http://msdn.microsoft.com/en-us/library/364x0z75.aspx

Assuming these are libraries, they are likely not based on native decimal opcodes. The Fujitsu SPARC M10 does have native decimal floating-point opcodes, and is therefore faster than a library implementation.
http://www.fujitsu.com/global/products/computing/servers/unix/sparc/concept/

So I think everyone posting here has a solid piece of a correct view of things, just not quite the same context....

I, too, would like to see a citation of some older (10+ years) decimal fp CPUs in aviation, rather than library implementations; that is how I took the comments on aviation computer architecture.

18. Jun 17, 2014

### FactChecker

You are exactly right. There are a lot of floating point calculations in current safety-critical flight controls, but the standards (MISRA and others) forbid checking floats for equality. This is common sense in general, since the odds of two arbitrary floats being exactly equal are so small. My misconception was that I thought IEEE 754 solved the problem when two simple, mathematically identical values were compared. @D H's example is a perfect one to show that I was wrong. I will want to study it more. Thanks all for clearing up my misconception. I am afraid I have hijacked this thread. Sorry.