vead

Look at this old post by Mr. Job:

Basically you start off with some logic gates: AND, NOT, and OR, for example.

Each gate receives one or more binary inputs (each either 0 or 1). The AND outputs 1 if both its inputs are 1, and 0 otherwise. The OR outputs 1 if either input is 1, and 0 otherwise. The NOT outputs 1 if its input is 0, and 0 if it is 1.
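Those three gates can be sketched directly in code (a minimal Python sketch, treating each input as a 0/1 integer):

```python
# Minimal models of the three basic gates described above.
def AND(a, b):
    return 1 if a == 1 and b == 1 else 0

def OR(a, b):
    return 1 if a == 1 or b == 1 else 0

def NOT(a):
    return 0 if a == 1 else 1

# AND(1, 1) -> 1, AND(1, 0) -> 0
# OR(0, 1)  -> 1, OR(0, 0)  -> 0
# NOT(0)    -> 1, NOT(1)    -> 0
```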

With these gates you build components for adding numbers. Addition of binary numbers works the same way as addition of regular (base 10) numbers; for example, 6 + 2 using 8-bit numbers is:

00000110

+00000010

------------

00001000

We can implement addition using the basic logic gates, to form an 8-bit adder, which adds two 8-bit numbers. The adder starts at the rightmost bits of both inputs and moves left, computing the sum and any carryovers for the next column, just like we do manually.

Once we have 8-bit adders (or X-bit adders generally), we can cascade them to sum numbers of 16, 32, 64 bits, etc.
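The right-to-left carry idea can be sketched in Python, building XOR and a full adder out of the basic gates and cascading eight of them. Bit lists are written most-significant-bit first, like the columns above; the helper names are just for illustration:

```python
# Sketch of an 8-bit ripple-carry adder built only from basic gates.
def AND(a, b): return a & b
def OR(a, b):  return a | b
def NOT(a):    return 1 - a

def XOR(a, b):
    # XOR from AND/OR/NOT: (a OR b) AND NOT (a AND b)
    return AND(OR(a, b), NOT(AND(a, b)))

def full_adder(a, b, carry_in):
    # Adds one column: returns (sum bit, carry out to the next column).
    s1 = XOR(a, b)
    total = XOR(s1, carry_in)
    carry_out = OR(AND(a, b), AND(s1, carry_in))
    return total, carry_out

def add8(x_bits, y_bits):
    # Start at the rightmost column and ripple the carry leftward.
    carry = 0
    out = []
    for a, b in zip(reversed(x_bits), reversed(y_bits)):
        s, carry = full_adder(a, b, carry)
        out.append(s)
    return list(reversed(out))

six = [0, 0, 0, 0, 0, 1, 1, 0]
two = [0, 0, 0, 0, 0, 0, 1, 0]
# add8(six, two) -> [0, 0, 0, 0, 1, 0, 0, 0], i.e. 8
```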

With addition implemented, you can think about implementing multiplication and then division; there are some popular and clever algorithms for doing both using the basic gates.

Some other operations you might want to do are shifts, i.e. shift all bits right or left:

shift 001 left = 010

So you develop a collection of operations all acting on 1 or more inputs:

Division

Multiplication

Addition

Subtraction

Logical AND

Logical NOT

Logical OR

Logical XOR

Shift Left

Shift Right

...
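As a sketch, the whole collection can be modeled as a tiny ALU dispatch table (Python, with values masked to 8 bits; the operation names here are illustrative, not from any real CPU):

```python
# Toy 8-bit ALU: each operation maps a name to a function on 8-bit values.
MASK = 0xFF  # keep results within 8 bits

OPS = {
    "ADD": lambda a, b: (a + b) & MASK,
    "SUB": lambda a, b: (a - b) & MASK,
    "MUL": lambda a, b: (a * b) & MASK,
    "DIV": lambda a, b: (a // b) & MASK,
    "AND": lambda a, b: a & b,
    "OR":  lambda a, b: a | b,
    "XOR": lambda a, b: a ^ b,
    "NOT": lambda a, b: (~a) & MASK,   # b unused
    "SHL": lambda a, b: (a << 1) & MASK,
    "SHR": lambda a, b: a >> 1,
}

def alu(op, a, b=0):
    return OPS[op](a, b)

# alu("ADD", 6, 2)  -> 8
# alu("SHL", 0b001) -> 0b010  (the shift example above)
```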

Then, once you have these operations in place, you want to use them: bring inputs in and perform operations on them. The CPU has a number of registers which store inputs and outputs. To bring input in from main memory and store it in the registers so that you can operate on it, you add an operation for moving bits from main memory into the registers; that's another operation.

Now that we have so many operations, we want to tell the CPU what operation to do. So, we encode the operations in some bit pattern, for example:

0001 Perform NOT

0010 Perform AND

0011 Perform OR

0100 Perform Addition

0101 Perform Subtraction

0110 Perform Multiplication

0111 Perform Division

1000 Move bits from memory into registers

...

...etc

Now, suppose you want the CPU to perform addition of 6 and 2. You pass it the operation code (0100), the number 6 (0110), and the number 2 (0010), so something like:

0100 0110 0010
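Decoding such a pattern can be sketched like this (assuming, for illustration, a 4-bit opcode followed by two 4-bit operands, as in the example):

```python
# Toy instruction decoder for a 12-bit word: 4-bit opcode + two 4-bit operands.
OPCODES = {0b0100: "ADD"}  # subset of the opcode table above

def decode(word):
    opcode = (word >> 8) & 0b1111   # top 4 bits
    a = (word >> 4) & 0b1111        # middle 4 bits
    b = word & 0b1111               # bottom 4 bits
    return OPCODES[opcode], a, b

instr = 0b0100_0110_0010
# decode(instr) -> ("ADD", 6, 2)
```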

Now we can start talking about a sequence of operations:

0100 0110 0010

0101 0010 0010

0100 0110 0110

0010 1000 0010

1000 1010 0010

...

This Machine Language is hard to work with, so we develop assembly languages, which compile to Machine Language; so for adding two numbers maybe now we can do:

ADD $R1 2

It's a very good explanation by Mr. Job.

I have a small doubt.

I understood how machine code is generated, but I did not understand how assembly code is made from machine code.

I tried to understand with the example below:

Machine language is hard to understand, so we developed assembly language.

MOV (1000)

6 (0110)

mov A, #6 (assembly language)

1000 0110 (machine language)

Another example:

MOV (1000)

2 (0010)

mov R1, #2 (assembly language)

1000 0010 (machine language)

8 = 6 + 2

add A, R1

0100 0110 0010

Machine language:

1000 0110

1000 0010

0100 0110 0010

Assembly language:

mov A, #6

mov R1,#2

add A,R1
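The translation I have in mind could be sketched as a toy table-lookup assembler (Python; the encodings follow the toy examples above, not any real instruction set, and a real assembler would also encode the register operands and resolve addresses):

```python
# Toy assembler: each mnemonic maps to a fixed bit pattern via a lookup
# table, and immediate operands like #6 are converted to 4-bit binary.
MNEMONICS = {"mov": "1000", "add": "0100"}

def assemble(line):
    parts = line.replace(",", " ").split()
    fields = [MNEMONICS[parts[0].lower()]]
    for operand in parts[1:]:
        if operand.startswith("#"):             # immediate value, e.g. #6
            fields.append(format(int(operand[1:]), "04b"))
    return " ".join(fields)

# assemble("mov A, #6")  -> "1000 0110"
# assemble("mov R1, #2") -> "1000 0010"
```

So the assembler is itself just a program that mechanically swaps human-readable names for bit patterns.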

I know assembly code is converted into machine code, but for a low-level understanding: how did we develop assembly language in the first place to work with machine language?