
Questions about computer processors?

  1. Oct 5, 2016 #1
    1) If a computer has a 1 GHz processor, does that mean it can process 1 billion bytes of data per second?

    2) If the answer was no to the first question, how does one determine how many bytes of data a computer can process per second?
     
  3. Oct 5, 2016 #2

    Simon Bridge

    User Avatar
    Science Advisor
    Homework Helper
    Gold Member
    2016 Award

    1. No - that means it runs on a clock that ticks 1 billion times per second - or that it has 1 billion machine cycles per second. How it uses the clock ticks to process data depends on the architecture. The clock speeds have traditionally been used to compare processors with similar architecture but these days things have got... complicated. Still, overclocking will speed up the processing.
    http://www.linfo.org/machine_cycle.html

    2. You ask the manufacturer - the usual unit for processing speed these days is FLOPS: how many floating-point operations a processor can do in one second.
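    As a toy illustration of what a FLOPS figure means, here is a deliberately naive pure-Python estimate. Real FLOPS numbers come from optimized benchmarks like LINPACK, not interpreted loops, so treat this as a sketch of the idea only:

    ```python
    import time

    def estimate_flops(n=2_000_000):
        """Very rough single-core FLOPS estimate: time a loop of
        multiply-adds.  Interpreter overhead dominates in pure Python,
        so this badly understates what the hardware can really do."""
        x = 1.0000001
        acc = 0.0
        start = time.perf_counter()
        for _ in range(n):
            acc = acc * x + 1.0  # one multiply + one add = 2 FLOPs
        elapsed = time.perf_counter() - start
        return 2 * n / elapsed

    print(f"~{estimate_flops():.2e} FLOPS (pure-Python lower bound)")
    ```

    A compiled or vectorized version of the same loop would report orders of magnitude more, which is itself a demonstration that the number depends on far more than the clock rate.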
     
  4. Oct 5, 2016 #3
    Is it possible to figure out how many bytes a computer can process per second by converting the gigahertz of a processor or FLOPS of a processor to the number of bytes a computer can process? As in, is there a formula that can be used to determine it?
     
  5. Oct 5, 2016 #4

    f95toli

    User Avatar
    Science Advisor
    Gold Member

    No, not in general since this will depend on the architecture.
    "Bytes per second" is not well defined for a real processor; especially not on modern CPUs, which usually have multiple cores, meaning they can do SOME (but not all) things in parallel.
     
  6. Oct 5, 2016 #5
    And the CPU often has to sit idle waiting for memory accesses to complete or for processor pipeline bubbles to "catch up".
     
  7. Oct 5, 2016 #6

    rbelli1

    User Avatar
    Gold Member

    If you have only one thing to do to a particular piece of data, you will most likely bump up against your memory bandwidth first, as glappkaeft discussed; with very large amounts of data you will bump up against storage or network bottlenecks instead.

    If you have lots to do to each piece of information you will do better. You still have the above bandwidth limits but you can take better advantage of the computing resources available. Then you can approach the calculation limit of the processor.

    Code analysis can calculate how much data can be processed per second. An easier method is to execute your application on the target hardware with a subset of representative data and measure the speed.

    BoB
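    The measure-it-yourself approach above can be sketched in a few lines of Python. The `process` callable here is a stand-in for whatever you actually do to each chunk of data:

    ```python
    import os
    import time

    def measure_throughput(data, process):
        """Time `process` over `data`; return bytes per second."""
        start = time.perf_counter()
        process(data)
        elapsed = time.perf_counter() - start
        return len(data) / elapsed

    # Stand-in workload: XOR every byte.  Replace with your real processing.
    sample = os.urandom(1_000_000)  # 1 MB of representative data
    rate = measure_throughput(sample, lambda d: bytes(b ^ 0xFF for b in d))
    print(f"{rate / 1e6:.1f} MB/s for this particular operation")
    ```

    The number you get is only meaningful for that workload on that machine, which is exactly the point being made in this thread.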
     
  8. Oct 7, 2016 #7

    Simon Bridge

    User Avatar
    Science Advisor
    Homework Helper
    Gold Member
    2016 Award

    iirc, in the various overclocking projects the limiting factor turned out to be the speed the RAM could work at.
    That limit was hit before computational artifacts could build up enough to impair the CPU's effectiveness.
     
  9. Oct 7, 2016 #8
    The best you can do is use a benchmarking tool that tests how quickly processors can perform some task. The task should preferably reflect realistic use of CPUs. PassMark is a common tool that consumers use.

    One thing that Intel and AMD like to do is roll out a new large design and then gradually shrink its feature size, which allows higher clock speeds. In the simple case where you have two chips with the same architecture but different clock speeds, the relationship between their processing power is straightforward.

    When you look at assembly code you notice that roughly half of the instructions tend to be data loads and stores. Memory speed is often the real bottleneck, as everyone is pointing out.
     
  10. Oct 7, 2016 #9
    Bytes is not a useful measurement. In assembly, SHL and DIV both assemble to roughly the same size instruction, but DIV takes an order of magnitude longer to execute.
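    The same point can be shown at a higher level: the sketch below pushes identical data through a cheap operation and a heavy one, and the resulting "bytes per second" figures differ wildly even though the data is the same. The choice of a shift versus a modular exponentiation is my own stand-in for the SHL/DIV contrast:

    ```python
    import time

    def bytes_per_second(op, values, width=8):
        """Throughput if each value counts as `width` bytes of data."""
        start = time.perf_counter()
        for v in values:
            op(v)
        elapsed = time.perf_counter() - start
        return width * len(values) / elapsed

    vals = list(range(1, 100_001))
    cheap = bytes_per_second(lambda v: v << 1, vals)           # analogous to SHL
    costly = bytes_per_second(lambda v: pow(v, 50, 97), vals)  # a much heavier op
    print(f"shift: {cheap / 1e6:.0f} MB/s   modexp: {costly / 1e6:.0f} MB/s")
    ```

    Since the data size is identical in both runs, any "bytes per second" rating would have to specify which operation it was measured with.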
     
  11. Oct 10, 2016 #10
    I wanted to know how long it would take a processor to process a series of images in a video recorder - say, 1 GB of data. Specifically to do with video image processing.
     
  12. Oct 10, 2016 #11
    So what's the best way to measure or calculate how fast a processor can process video image data? For example, suppose I had a video recorder holding a series of images adding up to 5 GB of data: how would I determine how fast the processor works through that 5 GB?
     
  13. Oct 10, 2016 #12

    jim mcnamara

    User Avatar

    Staff: Mentor

    Everyone is trying to tell you:
    There are limiting factors. A limiting factor is a component that holds back the performance of the rest of the system.
    Limiting factors
    1. I/O speed - I/O is usually many times slower than the CPU, say 10-50 times slower. SSDs are a lot faster.
    2. program load time - how long it takes to get the correct program read from disk and "setup" ready to go in memory
    3. fetch time - even if all program code is in memory, there is still a requirement to get a small chunk of code into a special kind of fast memory - a "prefetch buffer"
    Example: you read 40000 bytes into a prefetch buffer expecting to run that code. But no!
    The program takes a logical branch (an if statement) that jumps elsewhere.
    So you are stuck loading other code from main memory into the fast prefetch memory. The CPU has to twiddle
    its thumbs while the memory load goes on.
    4. Resource contention - the OS controls which program gets to use what, and when that resource can be used: wasted time. An extreme example of resource contention: your cell phone, or Windows on a PC, starts an update. You cannot use the machine while the update is installed, and not until after the system reboots. Memory is an example of a resource that is OS-controlled.

    To do what you asked:
    Code (Text):

    A. free up as much resource as possible
  1. reconfigure the system to stop a lot of services - you have to know which ones are safe to stop; I do not know a priori.
      2. Be sure the internet (ethernet cable) is not connected.
      3. reboot
    B. Setup
      1. make sure the data file is exactly 1 or 2 or 3 GB.  The larger the file the
          less program load time becomes a factor and the more disk I/O eats time instead.
      2. Be sure your setup will run as configured.
  3. You need a program-timing utility - like the Linux/UNIX time command.
     There is one for Windows; make sure it works.
    C. get data
      1. run your test, record time; reboot
  2. perform #1 several times to be sure the results you are getting are not a fluke.
     
    Personally, I do not think you are going to do any of this, which is why people rely on published test results. But you asked. The stopping-services step often reveals resource hogs you did not know about - you know, from the time you accidentally clicked on some kind of malware you never knew you had.

    Here is how to handle finding resource eating programs/services:
    https://support.microsoft.com/en-us/kb/331796
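    For step B.3 above, if a `time` command is not available, a minimal wall-clock wrapper (my own hypothetical helper, not a PF or OS utility) can serve the same purpose:

    ```python
    import subprocess
    import sys
    import time

    def time_command(cmd):
        """Rough wall-clock timer, in the spirit of the Unix `time` command."""
        start = time.perf_counter()
        subprocess.run(cmd, check=True)
        return time.perf_counter() - start

    # Example: time a trivial Python process (substitute your real workload).
    elapsed = time_command([sys.executable, "-c", "pass"])
    print(f"wall clock: {elapsed:.3f} s")
    ```

    Rebooting and re-running several times, as jim describes, averages out the program-load and caching effects that make a single measurement unreliable.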
     
  14. Oct 10, 2016 #13

    f95toli

    User Avatar
    Science Advisor
    Gold Member

    You can't.
    The only way to get a rough idea is to look at benchmarks where people have done something very similar. Even just changing the way you handle the FFTs (which you will be doing a LOT of in image processing) can dramatically change the time it takes to process one block of data. Memory access is, once again, likely to be the bottleneck.
    If you want predictability you are better off using an FPGA; but even then it would be quite complicated to predict timings, though at least for FPGAs you might be able to use software for this.
     