How do computers interpret binary code for text?

In summary: It depends on the format. Digital logic is *typically* designed so that a high voltage on a wire (say, 5V) is interpreted as a 1 and zero volts on the line is interpreted as a 0, and a clock (a periodic signal) tells the hardware when to sample the line. Character codes such as ASCII then map those bit patterns to letters.
  • #1
Steven Ellet
If I am too vague, please let me know
How do computers read binary?
I know that the 1s and 0s represent on and off.
If I spell out "hello" in binary using lamps (0 = off, 1 = on) and try to feed that info to my PC, I'm going to fail, because my PC has no way of handling that input. Despite that, my PC can handle my hard drive. How?
 
  • #2
Steven Ellet said:
If I am too vague, please let me know
How do computers read binary?
I know that the 1s and 0s represent on and off.
If I spell out "hello" in binary using lamps (0 = off, 1 = on) and try to feed that info to my PC, I'm going to fail, because my PC has no way of handling that input. Despite that, my PC can handle my hard drive. How?

Your PC has digital interfaces that convert the digital signals into codes with the correct format for it to use.

The lamp could be the LED transmitter of a remote audio device connected to the S/PDIF input of a computer, carrying digital audio over an optical light-link cable. The interface would take the off/on light signals and convert them into a format the CPU could use.
https://en.wikipedia.org/wiki/S/PDIF
 
  • #3
Letters are represented by numbers called character codes. When you type a letter on the keyboard, the hardware converts the button press to a keyboard scancode, which is then converted to a character code such as ASCII or Unicode, depending on your locale. There are standards defining keyboard layouts for the US, Europe, and Asia, and they differ from country to country.

http://www.computerhope.com/issues/ch001632.htm

http://homepage.cs.uri.edu/book/binary_data/binary_data.htm

https://en.wikipedia.org/wiki/ASCII

https://en.wikipedia.org/wiki/Unicode
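As a concrete illustration of character codes (a Python sketch; the values come from the ASCII table linked above):

```python
# 'C' has character code 67 in ASCII/Unicode; ord() and chr() convert
# between a character and its code, and format() shows the bit pattern.
c = 'C'
code = ord(c)                 # 67
bits = format(code, '08b')    # '01000011'
print(code, bits, chr(code))  # 67 01000011 C
```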
 
  • #4
nsaspook said:
convert the digital signals into codes with the correct format
What is the "correct format"? I was under the impression that all computing (at least on my PC) boiled down to 1s and 0s.
 
  • #5
The correct format means the keyboard letter is converted to an 8-bit number in ASCII, or to one or more 16-bit code units in UTF-16 for Unicode, or...
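To make the sizes concrete, a Python sketch (note Unicode itself is not fixed-width; UTF-16 uses 16-bit code units and UTF-8 uses 1 to 4 bytes per character):

```python
# Byte counts for characters under different encodings.
print(len('C'.encode('ascii')))     # 1 byte in ASCII
print(len('C'.encode('utf-16-le'))) # 2 bytes: one 16-bit code unit
print(len('€'.encode('utf-8')))     # 3 bytes: UTF-8 is variable-width
```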
 
  • #6
Steven Ellet said:
What is the "correct format?" and I was under the impression that all computing (at least on my pc) boiled down to 1s and 0s

The correct format is how the 1s and 0s are arranged in different byte-sized codes (8-bit, 16-bit, etc.) that the devices in the computer use. If the CPU is a 64-bit machine, it normally needs data translated into the correct word size (usually several bytes wide) to run effectively, and a 32-bit device needs the data sized correctly to operate on it effectively.
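A small Python sketch of "sizing" the same value for different widths, using the standard struct module (the format letters B, I, and Q are struct's names for 8-, 32-, and 64-bit unsigned integers):

```python
import struct

# The same value 67 packed into 8-, 32-, and 64-bit little-endian fields.
print(struct.pack('<B', 67))  # b'C' (1 byte)
print(struct.pack('<I', 67))  # 4 bytes (32-bit word)
print(struct.pack('<Q', 67))  # 8 bytes (64-bit word)
```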
 
  • #7
nsaspook said:
how the 1s and 0s are arranged
So it remains fundamentally ones and zeros. That said, the CPU still needs to understand the ones and zeros regardless of how they are "arranged".
 
  • #8
Steven Ellet said:
So it remains fundamentally ones and zeros. That said, the CPU still needs to understand the ones and zeros regardless of how they are "arranged".

Correct, but ones and zeros are just the Boolean logic levels the computer needs for its procedural operations on Boolean expressions. The actual electrical levels representing one and zero could be just about anything with two stable states.
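A software sketch of that idea; the voltage levels and threshold here are illustrative, not tied to any real hardware:

```python
# Any two stable states can encode 0 and 1; here we pick two voltage
# levels and a threshold, both arbitrary, and decide which bit a level means.
LOW, HIGH = 0.0, 5.0

def sample(voltage, threshold=2.5):
    """Interpret an electrical level as a logic 1 or 0."""
    return 1 if voltage > threshold else 0

print(sample(HIGH), sample(LOW))  # 1 0
```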
 
  • #9
nsaspook said:
Correct but ones and zeros are just the Boolean logic levels the computer needs for its procedural operations that use Boolean expressions. The actual electrical level of ones and zeros could be just about anything with two stable states.

Boolean logic means: does x = y? TRUE or FALSE.
How does the PC read x and y?
 
  • #10
Steven Ellet said:
Boolean logic means: does x = y? TRUE or FALSE.
How does the PC read x and y?

The x and y could be just about any binary code, or even single bits in one byte of data, where the TRUE or FALSE is the result of a Boolean equality operator on x and y.

"How does the PC read x and y?" is a very broad question: the electrical and digital format of the data, from the platter and heads of a regular hard drive to the logic gates inside the CPU, involves many changes in electrical levels and data formats along the way.

For applications programming you're normally not interested in all that detail; the symbolic logic of your programming language and OS hides those things from you.
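In Python terms, the equality test and the single-bit case look like this (a sketch of the idea, not of any particular hardware):

```python
# x written as a bit pattern, y as a decimal: the same 1s and 0s underneath.
x, y = 0b01000011, 67
print(x == y)                # True: the Boolean equality operator on x and y

# Testing a single bit inside the byte with a mask:
print(bool(x & 0b00000001))  # True: the lowest bit of 'C' is 1
```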
 
  • #11
It would be helpful if you read some of the links I posted in post #3.

The second link talks about the various conversions that @nsaspook referred to in his post.
 
  • #12
Steven Ellet said:
Boolean logic means: does x = y? TRUE or FALSE.
How does the PC read x and y?

As has been said, it depends on the format. Digital logic is *typically* designed so that a high voltage on a wire (say, 5V) is interpreted as a 1 and zero volts on the line is interpreted as a 0. There are many, many variations to this, and you usually need a clock (which is a wire that periodically switches from high to low voltage and back again) to tell the computer when to sample the wire to see if a 1 or 0 is being communicated.
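The clock idea can be sketched in software; the lists below stand in for samples of the clock and data wires (invented values, for illustration only):

```python
def sample_on_rising_edge(clock, data):
    """Latch the data line each time the clock goes from 0 to 1."""
    bits, prev = [], 0
    for clk, d in zip(clock, data):
        if prev == 0 and clk == 1:  # rising edge: sample the data wire
            bits.append(d)
        prev = clk
    return bits

clock = [0, 1, 0, 1, 0, 1]
data  = [1, 1, 0, 0, 1, 1]
print(sample_on_rising_edge(clock, data))  # [1, 0, 1]
```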

You asked about hard drives. VERY simply, the hard drive has a magnetized read head that moves over a platter that looks a lot like a CD. At each spot, if the platter is magnetized the read head will spit out a bit of current. If it is not magnetized, the read head will spit out a different amount of current. These currents are interpreted by circuits in the hard drive controller to be 0 and 1 and then they are sent to the CPU.
 
Last edited:
  • #13
It might help to read up a little on assembly language. It is the next layer up from binary.

The 1s and 0s are interpreted in a specific way to produce ultra-simple commands, from which everything more complex is built up.
A particular bit pattern (an opcode) will be read as a command like JMP, and the following value provides a memory address.
Another opcode might be ADD, followed by another memory address.
This means, basically: go to this address, take the value you find there, and add it to the value at this other address.

This is (sort of) how a processor assembles all of its complex instructions.

It'll do dozens or hundreds of these to accomplish the simplest tasks like opening a file.
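A toy fetch-decode-execute loop in Python makes the idea concrete. The opcodes, memory layout, and numbers here are invented for illustration; real instruction sets differ:

```python
# Toy machine: memory holds the program (opcode, address pairs) then data.
LOAD, ADD, HALT = 1, 2, 0
memory = [LOAD, 6, ADD, 7, HALT, 0, 40, 2]  # program at 0-4, data at 6-7
acc, pc = 0, 0  # accumulator and program counter

while True:
    op = memory[pc]                        # fetch the opcode
    if op == LOAD:
        acc = memory[memory[pc + 1]]       # load the value at the given address
        pc += 2
    elif op == ADD:
        acc += memory[memory[pc + 1]]      # add the value at the given address
        pc += 2
    else:                                  # HALT
        break

print(acc)  # 42: loaded 40, then added 2
```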
 
  • #14
analogdesign said:
You asked about hard drives. ... the hard drive has a magnetized read head that moves over a platter that looks a lot like a CD.
The read head is not magnetized itself; it senses changes in magnetization direction that represent the bits on a disk as they pass near the head. The orientation of the magnetization was changed to perpendicular a few years ago to increase density. Wiki article:

http://en.wikipedia.org/wiki/Disk_read-and-write_head
 
  • #15
jedishrfu said:
Letters are represented by numbers called character codes. When you type a letter on the keyboard the hardware converts the button press to a keyboard scancode which is then converted to a character code such as ascii or unicode based on where in the world you come from. Basically there are standards that define keyboards in the US, Europe and Asia which are different for each country.

http://www.computerhope.com/issues/ch001632.htm

http://homepage.cs.uri.edu/book/binary_data/binary_data.htm

https://en.wikipedia.org/wiki/ASCII

https://en.wikipedia.org/wiki/Unicode

The following is my understanding of what a PC goes through to type "C" in a text document, based on your second link:
C = 01000011 = 67
Input: 'C'
Computer reaction:
Step 1: The program receives 'C' and looks it up: 'C' = 01000011
Step 2: 01000011 → check database → 01000011 = 67 → return to program
Step 3: 67 → check database → 67 = display C → done
If I am wrong, please provide an updated diagram.
 
  • #16
Steven Ellet said:
The following is my understanding of what a PC goes through to type "C" in a text document, based on your second link:
C = 01000011 = 67
Input: 'C'
Computer reaction:
Step 1: The program receives 'C' and looks it up: 'C' = 01000011
Step 2: 01000011 → check database → 01000011 = 67 → return to program
Step 3: 67 → check database → 67 = display C → done
If I am wrong, please provide an updated diagram.
It's much more involved than this. Just to write a single character to a file takes something on the order of 100 to 200 assembly instructions. In addition to translating a bit pattern (01000011) to a character ('C'), the program has to gather up information about the file to be written to and call into the operating system to actually carry out the write operation.
 
  • #18
Steven Ellet said:
The following is my understanding of what a PC goes through to type "C" in a text document, based on your second link:
C=01000011=67
Input:'C'

Your questions explain why a large percentage of people who do understand the tiny details and know that every action in a digital computer system is a precisely programmed and designed set of actions translated from our thoughts is unlikely to ever have true intelligence.
 
Last edited by a moderator:
  • #19
nsaspook said:
Your questions explain why a large percentage of people who do understand the tiny details and know that every action in a digital computer system is a precisely programmed and designed set of actions translated from our thoughts is unlikely to ever have true intelligence.
I tried to diagram this sentence, but was unable to do so. My diagram ended up looking like a Texas road map. :oldsurprised:
 
  • #20
Mark44 said:
I tried to diagram this sentence, but was unable to do so. My diagram ended up looking like a Texas road map. :oldsurprised:

Real Texas road maps show a straight line for a few hundred miles then a sharp left to a bar.
 
  • #21
Your CPU has no concept of letters; it's your program that tells the CPU how to interpret the letters from the binary. For example, "hello" in C is not the same binary as "hello" in Pascal (most of it is the same, but the first and last bytes are different).

Most of the characters adhere to this standard: http://www.asciitable.com/ or this one https://en.wikipedia.org/wiki/UTF-8.
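The C vs. Pascal difference mentioned above can be shown with raw bytes (a sketch: C strings end with a NUL byte, classic Pascal strings start with a length byte):

```python
# "hello" laid out in memory under the two conventions.
c_style = b'hello' + b'\x00'          # NUL terminator at the end
pascal_style = bytes([5]) + b'hello'  # length byte (5) at the start
print(c_style)       # b'hello\x00'
print(pascal_style)  # b'\x05hello'
```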
 

1. What is binary?

Binary is a numbering system that uses only two digits, 0 and 1, to represent all numbers and characters. It is the foundation of all modern computer systems and is used to store and process data.

2. Why is binary used in computers?

Computers use binary because it is a simple and efficient way to represent and process data. By using only two digits, electronic circuits can easily represent and manipulate information, making it easier for computers to perform complex calculations and tasks.

3. How does binary code work?

In binary, each digit represents a power of 2, with the rightmost digit representing 2^0, the next representing 2^1, and so on. By combining these digits, any number or character can be represented. For example, the binary number 0101 would represent the decimal number 5.
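The place-value rule above can be checked directly in Python:

```python
# 0101 = 0*8 + 1*4 + 0*2 + 1*1 = 5
print(int('0101', 2))    # 5
print(format(5, '04b'))  # '0101'
```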

4. What is the difference between binary and decimal?

The main difference between binary and decimal is the number of digits used: binary uses only 0 and 1, while decimal uses 0-9. Binary actually needs more digits to represent the same number, but its two-state digits map directly onto electronic circuits with two stable states, which is what makes it practical for computers.

5. How is binary used in programming?

In programming, binary is used to write and store instructions for a computer to follow. These instructions are translated into binary code, which is then executed by the computer's processor. This allows programmers to create complex programs and applications that can perform various tasks.
