Understanding 8-Digit Binary Numbers: A Beginner's Guide

  • Thread starter: luckis11
AI Thread Summary
The discussion centers on how computers differentiate between bytes, specifically how they recognize the start and end of each byte in both hardware and software contexts. It is established that in hardware, bytes are distinguished by their fixed length of 8 bits, allowing systems like hard drives and RAM to identify byte boundaries based on bit positions. In software, bytes are typically the smallest unit of data handled, and while bits can theoretically be processed, practical applications usually deal with bytes directly. The conversation also touches on the nature of binary signals, questioning the interpretation of bits as electrical signals and the functioning of logic gates, particularly the NOT gate, which inverts signals. Overall, the thread seeks clarity on the fundamental concepts of binary data representation and signal processing in computing.
luckis11
How does it (it = what?) distinguish each 8-digit group (e.g. 10100100) from the previous one and the next?
 
:confused:

I'm sorry, I don't understand your question.
 
Nor do I. Can you restate the question more clearly?
 
I think the question is:

How does the computer know when one byte ends and the next begins?
 
Yes, a byte.

I also asked what exactly distinguishes them, now I see that this means two questions?: What distinguishes the bytes at software and what at hardware.
 
I believe the only thing that distinguishes bytes in hardware is the fact that they are all 8 bits long. Bytes always begin at bit positions that are multiples of 8, so the hard drive or RAM knows that (counting from 0) bit 64 is the first bit of its byte.

I can't think of a situation where software would really see individual bits rather than bytes. But if it did, it would work the same way: assume that every byte begins at a bit position that is a multiple of 8.
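To make the byte-boundary convention concrete, here is a small illustrative sketch (not any particular hardware's logic, just the arithmetic the posts above describe): given a global bit index, integer division and modulo recover which byte the bit belongs to and its position within that byte.

```python
def locate(bit_index: int) -> tuple[int, int]:
    """Return (byte_number, bit_within_byte), both counted from 0.

    Byte k occupies bits 8*k .. 8*k + 7, so the boundaries are
    implied purely by position -- no marker between bytes is needed.
    """
    return bit_index // 8, bit_index % 8

# Bit 64 is the first bit (position 0) of byte 8 when counting from 0:
print(locate(64))  # (8, 0)
# Bit 7 is the last bit of byte 0:
print(locate(7))   # (0, 7)
```

This is why no separator between bytes exists anywhere: both hardware and software simply agree on the fixed 8-bit width and count.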
 
PLEASE forget my previous question. I want to grasp this:

Do the bits 0 and 1 mean whether a signal passes through a gate or not? That seems wrong: the NOT gate converts a signal "1" into a signal "0", yet a signal always comes out of that gate?

Or is it (ONLY, or ALSO?) that e.g. 101 means that on a wire (just a wire, no gates in between) there is an electrical pulse pattern of (wavefront, no wavefront, wavefront)?
 
luckis11 said:
PLEASE forget my previous question. I want to grasp this:

The bits 0 and 1 are what?

Whether a signal passes through a gate or not? This seems wrong, because the NOT gate converts a signal "1" into a signal "0", yet a signal always comes out of that gate?

Is it that e.g. 101 means that on a wire (just a wire, no gates in between) there is an electrical pulse pattern of (wavefront, no wavefront, wavefront)? This also seems wrong, because if it were so, the signal arriving at the gate from one wire would have to be 111111111... and the other 000000000... otherwise how could it be that...

Is there a link that explains this?
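A sketch of the usual answer, under the assumption of simple digital logic (the specific voltages are illustrative, not from the thread): a bit is not "signal vs. no signal" but a choice between two voltage levels on a wire, e.g. a low level near 0 V for 0 and a high level for 1. A NOT gate always drives its output; it simply drives the opposite level. So 101 is three successive levels sampled at clock ticks, not pulse / gap / pulse.

```python
# Two voltage levels, encoded as the bits they represent.
LOW, HIGH = 0, 1

def not_gate(bit: int) -> int:
    """Model a NOT gate: the output is always driven; only the level flips."""
    return HIGH if bit == LOW else LOW

# 101 as a sequence of levels on one wire, one per clock tick.
# Inverting it gives 010 -- there is a signal at every tick either way.
signal = [1, 0, 1]
print([not_gate(b) for b in signal])  # [0, 1, 0]
```

On this picture the "000000000..." worry dissolves: a wire held at the low level is still carrying a perfectly good signal (the level 0), distinguishable from the high level at every clock tick.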
 