Accuracy of 8-bit vs 16-bit vs 32-bit microcontrollers

  • Thread starter Femme_physics
In summary: whether a microcontroller is 8-bit, 16-bit, or 32-bit does not by itself determine measurement accuracy. The bit count that matters for measurement is the resolution of the ADC, and other factors such as sampling frequency and noise also affect the result. Accuracy and precision are separate concepts: accuracy is the degree of conformity to the true value, and precision is the degree of reproducibility.
  • #1

Femme_physics

Suppose I'm making a measuring device of some sort that gets its feed from a microcontroller. Would there be a huge difference between 8-bit, 16-bit, and 32-bit microcontrollers in terms of the device's capability to measure accurately? Like, I imagine, the 8-bit could only measure jumps of 0.5 mm, the 16-bit jumps of 0.1 mm, and the 32-bit 0.05 mm. Is that right?

Or will the difference be on a more microscopic scale?

I guess I'm more looking to see how big the difference is in terms of accuracy of measurements.
 
  • #2
Perhaps I am missing something or my English fails me, but I don't see how the number of bits in the microcontroller is related to accuracy. An 8-bit microcontroller is perfectly capable of dealing with 32-bit integers. It may require some additional coding, but there is nothing stopping it from handling any accuracy you like.

I wonder if you really asked the question you wanted to ask :wink:
 
  • #3
Hm, well I'll try to be clearer :)

Let's say I have a potentiometer and I connect it to a development board with an 8-bit microcontroller. If I instead connect the potentiometer to a development board with a 32-bit microcontroller, wouldn't it be much more accurate? After all, an 8-bit microcontroller can only read that potentiometer in terms of 0-255, whereas a 32-bit one could read it in terms of 0-4294967295.
 
  • #4
Femme_physics said:
an 8-bit microcontroller can only read that potentiometer in terms of 0-255, whereas a 32-bit one could read it in terms of 0-4294967295.

There is nothing that stops the 8-bit microcontroller from reading a 32-bit number as four consecutive bytes. It could be that these things are not implemented this way, but as I explained earlier, the number of bits in the microcontroller doesn't prevent it from dealing with larger numbers.
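As a quick illustration of the "four consecutive bytes" idea (plain Python rather than MCU code, and the function name is my own), here is how four 8-bit transfers can be reassembled into one 32-bit reading:

```python
# Reassemble a 32-bit value received as four 8-bit transfers.
# An 8-bit CPU does exactly this, just one byte-sized register at a time.

def assemble_u32(b3, b2, b1, b0):
    """Combine four bytes (most significant first) into one 32-bit value."""
    return (b3 << 24) | (b2 << 16) | (b1 << 8) | b0

# Example: the bytes 0x12, 0x34, 0x56, 0x78 arrive one at a time
# over an 8-bit bus and are combined into a single number.
reading = assemble_u32(0x12, 0x34, 0x56, 0x78)
print(hex(reading))  # 0x12345678
```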
 
  • #5
Really? Interesting... that explains why accuracy wasn't mentioned in any of the many articles I read on this issue. All that was mentioned in terms of the 32-bit advantages is:

High operating frequency
Low active power requirement
Code reuse
Code size reduction by 40-50 percent
Enhanced features for increasingly complex algorithms
Cost reduction due to device aggregation
External connectivity - Ethernet, USB, CAN
Efficient real-time operations
Availability of tools.

I guess that's indeed all.

Thanks for the answer :-)
 
  • #6
I think you are confusing two different things here.

What you are describing really has to do with A-to-D conversion, which is different from the bit width of the microprocessor/MCU itself.

I'm no expert on A-D theory, though I use ADCs in many of my projects, e.g. my digital seismograph (earthquake recorder).
The original 8-bit A/D unit had that 0-255 range; my newer one is a 16-bit A/D. For me it translates to the fact that I can record a larger (wider) range of input signal before it maxes out and overloads the A/D chip.

This, as Borek has mentioned, has nothing to do with the bit count of the microprocessor.

cheers
Dave
 
  • #7
So it's the difference between an 8-bit, 16-bit, or 32-bit ADC that determines the accuracy I was talking about?

Or, can you also just increase the amount of code and get the same accuracy?
 
  • #8
Femme_physics said:
So it's the difference between an 8-bit, 16-bit, or 32-bit ADC that determines the accuracy I was talking about?

Looks like it. That's why I wrote in my very first post:

I wonder if you really asked the question you wanted to ask

But I don't know how the ADC communicates with the microcontroller: it can send 32 bits over a 32-bit-wide bus in one step, or it can send them as four consecutive bytes over an 8-bit bus, so there are details that can change the way these things work in practice. You'd better wait for someone who uses these things to comment on the practical applications.
 
  • #9
As people have said, there is no such thing as accuracy when we speak about microcontrollers themselves.
An 8-bit micro can do all operations with 32-bit integers or float-type variables just as correctly as a 32-bit uC would. The 8-bit micro just needs more time, since a 32-bit variable has to be stored in four 8-bit memory locations and the hardware can only handle 8 bits at a time, but it certainly won't make mistakes in a multiplication, for example :D
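To see why the narrow hardware costs only time, not correctness, here is a sketch (in Python, just simulating the principle) of a 32-bit addition done the way an 8-bit ALU does it:

```python
# Simulate a 32-bit addition performed 8 bits at a time, the way an
# 8-bit ALU does it: add byte by byte, propagating the carry.

def add32_on_8bit(a, b):
    result, carry = 0, 0
    for i in range(4):                  # least significant byte first
        byte_a = (a >> (8 * i)) & 0xFF
        byte_b = (b >> (8 * i)) & 0xFF
        s = byte_a + byte_b + carry
        carry = s >> 8                  # carry into the next byte
        result |= (s & 0xFF) << (8 * i)
    return result & 0xFFFFFFFF          # wraps like a real 32-bit register

print(add32_on_8bit(0x0000FFFF, 1))  # 65536, i.e. 0x00010000
```

Four small additions instead of one wide one, but the answer is exact.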

In order to read an analog signal, such as the voltage from a pot or an audio signal, you need to convert it to digital using an A/D converter. If you are using converters integrated in the microcontroller, like the ADCs of Microchip PICs for example, 8-bit, 16-bit and 32-bit micros have ADCs with 10-bit resolution, so to answer your question, in such a case the "accuracy" is the same.

Accuracy is not a good term to describe analog-to-digital converters; we usually speak in terms of resolution. An 8-bit converter gives you 2^8 steps. One more important factor that describes ADCs is frequency.
Sometimes you might have an A/D converter that can convert a signal with 32-bit resolution but can only sample at 10 kHz, for example. You can't use such a device in audio applications, because with it you can only capture a small portion of the audio spectrum, up to 5 kHz; the rest would be lost.
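Both numbers fall straight out of the arithmetic; a quick sketch (the figures are just the examples from this post):

```python
# Resolution: the number of distinct steps an N-bit converter can report.
for n_bits in (8, 16, 32):
    print(f"{n_bits}-bit ADC: {2 ** n_bits} steps")

# Frequency: by the Nyquist criterion, a converter sampling at rate fs
# can only capture signal content up to fs / 2.
fs = 10_000            # 10 kHz sampling rate, as in the example above
nyquist = fs / 2
print(f"Max capturable frequency: {nyquist / 1000} kHz")  # 5.0 kHz
```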
 
  • #10
Electropioneer said:
Accuracy is not a good term to describe analog-to-digital converters; we usually speak in terms of resolution.
Accuracy, precision, and resolution are three different concepts. As you noted, saying "I'm using an 8 bit ADC for this" talks about resolution.

Suppose one has an instrument that reports values over a range of 0 to 9.99, in units of 0.01. That 0.01: That's the resolution of the instrument.

Now let's use that device to repeatedly measure what should be the same value. Suppose that we get readings such as 2.34, 2.39, 2.31, 2.41, 2.35, 2.29, ... The precision of the instrument is only 0.1 or so. That extra decimal place of resolution: That's just noise.

Finally, suppose that the experiment was set up so that the value should have been reported as 2.00. Oops. The accuracy of the instrument isn't all that great.
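The three concepts can be seen in a small simulation (the readings below are the invented example values from this post, not real data):

```python
import statistics

true_value = 2.00                     # what the instrument should report
readings = [2.34, 2.39, 2.31, 2.41, 2.35, 2.29]   # repeated measurements

mean = statistics.mean(readings)
spread = statistics.stdev(readings)

# Resolution: 0.01, the last digit the instrument displays.
# Precision: the scatter of repeated readings about their own mean.
print(f"spread (precision): ~{spread:.2f}")
# Accuracy: how far the mean is from the true value - a systematic offset
# that no amount of extra resolution or averaging of noise will remove.
print(f"offset (accuracy error): {mean - true_value:.2f}")
```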
 
  • #11
As D H said, the number of bits dictates resolution, not accuracy.

From wikipedia on accuracy and precision:
http://upload.wikimedia.org/wikipedia/commons/3/38/Accuracy_and_precision.svg

A low drift voltage reference would be precise but may not be accurate. If the reference voltage is measured to be exactly what it is claimed to be then it is both accurate and precise.

The reference to your ADC would be the governing factor for accuracy and precision. The number of bits your ADC generates dictates how fine the physical difference (e.g. volts) is between two consecutive numbers.

Suppose you have a 2.048V ideal reference. I chose 2.048 because it's a common reference voltage and when dividing by powers of two, the result is a number that's easy to work with.
Let's suppose you have a 10bit ADC.
2^10 = 1024.
2.048/1024 = 2mV resolution.

This means that the difference between a reading of 511 and 512 is 2mV.
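The same arithmetic, spelled out (using the 2.048 V reference and 10-bit converter from the example above):

```python
v_ref = 2.048          # ideal reference voltage in volts
n_bits = 10
steps = 2 ** n_bits    # 1024 distinct codes
lsb = v_ref / steps    # size of one step (one LSB)

print(f"1 LSB = {lsb * 1000:.1f} mV")   # 2.0 mV

def code_to_volts(code):
    """Convert a raw ADC code to the voltage it represents."""
    return code * lsb

# The difference between a reading of 511 and 512 is exactly one LSB.
print(f"{code_to_volts(512) - code_to_volts(511):.4f} V")  # 0.0020 V
```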
 
  • #12
I admit I used the term rather loosely, perhaps too loosely, forgetting this is Physics Forums, and I'm due a well-deserved e-admonition. But yes, I'm aware of the differences :) Hysteresis, repeatability, resolution etc... yes, accuracy has many parameters. Regardless, thank you :-) This thread answered my question.
 
  • #13
I could, at this stage, bring in the fact that AC signals can be coded with a single bit and regenerated to any degree of accuracy (I probably mean resolution) you want. All that's necessary is that the sampling rate should be high enough to spread the quantising noise power over a wide enough frequency range.
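A toy illustration of that idea, assuming a first-order one-bit modulator (this is a sketch of the principle, not a real converter design):

```python
# One-bit coding of a DC level between 0 and 1: at each step the
# accumulated error decides whether to emit a 1 or a 0.  Averaging
# enough bits recovers the level to almost any resolution you like;
# the price is a sample rate far above the signal bandwidth, which
# spreads the quantising noise over a wide frequency range.

def one_bit_encode(level, n_samples):
    acc, bits = 0.0, []
    for _ in range(n_samples):
        acc += level
        if acc >= 1.0:          # emit a 1 and subtract it from the error
            bits.append(1)
            acc -= 1.0
        else:
            bits.append(0)
    return bits

bits = one_bit_encode(0.3, 10_000)
print(sum(bits) / len(bits))    # ~0.3
```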
 
  • #14
Also, some confusion may come from the fact that many uCs have analog inputs (and outputs), i.e. A-D converters built in. However, you need to review the datasheet for the uC to check the resolution of those inputs (and outputs); if you then need better resolution, you probably need a separate component.
 

1. What is the difference between 8-bit, 16-bit, and 32-bit microcontrollers?

The difference between 8-bit, 16-bit, and 32-bit microcontrollers lies in their word size, which determines the amount of data that can be processed at once. An 8-bit microcontroller can process 8 bits of data at a time, while a 16-bit microcontroller can process 16 bits, and a 32-bit microcontroller can process 32 bits.

2. How does the word size affect the accuracy of a microcontroller?

The word size itself does not set measurement accuracy. It determines how much data the CPU can process in a single operation, so wider cores handle large numbers and complex math faster, but a smaller core can emulate wider arithmetic in software with no loss of correctness, only of speed. Measurement accuracy is set by the ADC and the analog front end, not by the CPU's word size.

3. Are 32-bit microcontrollers more accurate than 8-bit or 16-bit microcontrollers?

No. A 32-bit microcontroller is faster at wide arithmetic, but it is not inherently more accurate at measurement. Measurement accuracy depends on the resolution and quality of the ADC, the stability of the voltage reference, and noise in the circuit; in fact, 8-bit, 16-bit, and 32-bit parts often ship with ADCs of the same resolution.

4. What are the advantages and disadvantages of using a 32-bit microcontroller?

The main advantages of a 32-bit microcontroller are higher processing speed, larger memory capacity, richer external connectivity (Ethernet, USB, CAN), and the ability to handle more complex tasks. However, 32-bit parts can also be more expensive and consume more power than smaller-word-size microcontrollers.

5. How do I determine which word size of microcontroller is best for my project?

The best word size for your project depends on its specific requirements. If you need heavy computation, large memory, or fast connectivity, a 32-bit microcontroller may be the best choice; if your project involves simpler tasks and a limited budget, an 8-bit or 16-bit microcontroller may suffice. For measurement accuracy, focus on the ADC and the analog design rather than the CPU's word size, and research different microcontrollers carefully before deciding.
