Instrument error

From Wikipedia, the free encyclopedia

Instrument error refers to measurement error attributable to the measuring instrument itself.[1] It can be caused by manufacturing tolerances of the instrument's components, the accuracy of the instrument's calibration, or a difference between the measurement conditions and the calibration conditions (e.g., a measurement made at a temperature different from the calibration temperature).

Such errors are distinguished from errors with other causes: errors made when reading the measurement, human errors, and errors caused by the presence of the instrument itself altering the measurement environment.

Like other measurement errors, instrument errors can be of various types, and the overall error is the sum of the individual errors.

Like other measurement errors, instrument errors can also be classified into the following types based on how the errors behave when the measurement is repeated:

  • Systematic errors
  • Random errors
  • Absolute error

Systematic errors

A systematic error is an error that remains constant from measurement to measurement under the same measurement conditions. The size of the systematic error is sometimes referred to as the accuracy. For example, the instrument may always indicate a value 5% higher than the actual value, or the relationship between the indicated and actual values may be more complicated than that. A systematic error may arise because the instrument has been incorrectly calibrated, or because a defect has arisen in the instrument since it was calibrated. Instruments should be calibrated against a standard instrument that is known to be accurate, and ideally the calibration should be repeated at intervals. The most rigorous standards are those maintained by a standards organization such as NIST in the United States, or by an international body such as the ISO.

If the user knows the amount of the systematic error, they may decide to correct for it manually rather than having the instrument expensively adjusted to eliminate the error: e.g., in the above example they might reduce all the values read by about 4.8%, since the actual value is the indicated value divided by 1.05.
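The correction described above can be sketched as follows; the readings are hypothetical values chosen for illustration, assuming an instrument that reads exactly 5% high:

```python
# Correcting a known systematic error of +5% (hypothetical readings).
# If the instrument reads 5% high, the actual value is reading / 1.05,
# which is equivalent to reducing each reading by about 4.8%.

readings = [10.50, 20.48, 31.50]  # values indicated by the instrument

corrected = [r / 1.05 for r in readings]
for r, c in zip(readings, corrected):
    print(f"indicated {r:.2f} -> corrected {c:.2f}")
```

Note that dividing by 1.05 is not the same as subtracting 5%: subtracting 5% of the indicated value would overcorrect slightly.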

Instrument impact on the measurement environment

The act of taking the measurement may alter the quantity being measured. For example, an ammeter has its own built-in resistance, so when it is connected in series with an electrical circuit, it slightly reduces the current flowing through the circuit.
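The ammeter example can be quantified with Ohm's law; the voltage and resistance values below are hypothetical, chosen only to illustrate the size of the effect:

```python
# Hypothetical circuit: a 10 V source driving a 100 ohm load.
# Inserting an ammeter with 1 ohm internal resistance in series
# slightly reduces the current it is trying to measure (I = V / R).

V = 10.0         # source voltage (volts)
R_load = 100.0   # load resistance (ohms)
R_ammeter = 1.0  # ammeter internal resistance (ohms)

true_current = V / R_load                    # current without the meter
measured_current = V / (R_load + R_ammeter)  # current with the meter inserted

error_percent = (true_current - measured_current) / true_current * 100
print(f"true: {true_current:.4f} A, measured: {measured_current:.4f} A, "
      f"error: {error_percent:.2f}%")
```

The smaller the ammeter's internal resistance relative to the circuit's resistance, the smaller this loading error becomes.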

Random errors

A random error is an error that varies from measurement to measurement under the same measurement conditions. The range of possible random errors is sometimes referred to as the precision (the spread of measured values). Random errors may arise from the design of the instrument.

The effect of random error can be reduced by repeating the measurement several times under the same controlled conditions and taking the average result.
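Averaging repeated measurements can be illustrated with a simple simulation; the true value, noise level, and number of repetitions below are arbitrary assumptions for the sketch:

```python
import random

random.seed(42)  # fixed seed so the simulation is repeatable

TRUE_VALUE = 10.0
NOISE_SD = 0.5  # standard deviation of the random error


def measure():
    """Simulate one measurement with zero-mean random error."""
    return TRUE_VALUE + random.gauss(0, NOISE_SD)


single = measure()
averaged = sum(measure() for _ in range(100)) / 100

print(f"single reading: {single:.3f}")
print(f"average of 100: {averaged:.3f}")
```

Because the random errors are zero-mean, they tend to cancel: the standard error of the mean of n readings shrinks in proportion to 1/sqrt(n), so the average of 100 readings is typically about ten times closer to the true value than a single reading. Averaging does not reduce systematic error, which affects every reading equally.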

Noise on measurements

Electrical noise in an instrument's components, or temperature fluctuations in the quantity being measured, may induce random errors in the measurement.

Measurement reading accuracy

If the instrument has a needle pointing to a scale graduated in steps of 0.1 units, then, depending on the design of the instrument, it is usually possible to estimate tenths of the interval between successive marks, so the result can be read to an accuracy of about 0.01 units.
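Such a reading amounts to linear interpolation between scale marks; the needle position below is a hypothetical example:

```python
# Hypothetical needle reading on a scale graduated in 0.1-unit steps.
# If the needle appears to sit about 3/10 of the way between the
# 2.4 and 2.5 marks, the reading is estimated by linear interpolation.

lower_mark = 2.4
step = 0.1
fraction = 0.3  # estimated needle position between marks, in tenths

reading = lower_mark + fraction * step
print(f"estimated reading: {reading:.2f}")
```

The estimated digit (here the hundredths place) is less certain than the graduated digits, since it depends on the observer's judgment of the needle position.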

References

  1. ^ Bolton, William (2021). "APPENDIX A. Errors". Instrumentation and Control Systems (3rd (Kindle) ed.). Newnes. ISBN 978-0-12-823471-6.