Accuracy of measuring instruments and measurements. Error, types and accuracy of measurements

ACCURACY OF MEASUREMENTS

A characteristic of the quality of measurements that reflects how close the measurement results are to the true value of the measured quantity. The less the measurement result deviates from the true value, i.e., the smaller its error, the higher the accuracy, regardless of whether the error is systematic, random, or contains both components (see MEASUREMENT ERRORS). The error itself is sometimes cited as a quantitative assessment of accuracy, but error is the concept opposite to accuracy, so it is more logical to take as the assessment the reciprocal of the relative error (without regard to its sign). For example, if the relative error is ±10⁻⁵, the accuracy is 10⁵.

Physical Encyclopedic Dictionary. Moscow: Soviet Encyclopedia, editor-in-chief A. M. Prokhorov, 1983.



The great Russian scientist Dmitri Ivanovich Mendeleev said: “Science begins where measurements begin.” During this lesson, you will learn what a measurement is, what the scale division of a measuring instrument is and how to calculate it, and also learn how to determine the error (inaccuracy) of measurement results.

Topic: Introduction

Lesson No. 2: Physical quantities and their measurement.

Accuracy and error of measurements.

The purpose of the lesson: get acquainted with the concept of “physical quantities”; learn to measure physical quantities using simple measuring instruments and determine the measurement error.

Equipment: ruler, beaker, thermometer, ammeter, voltmeter.

1. Checking homework (15 minutes).

1) The first student solves problem No. 5 at the board.

2) The second student solves problem No. 6 at the board.

3) The rest write a physical dictation.

4) Ask the students solving problems at the board additional questions on the paragraph and the basic definitions.

5) As an additional question, ask class 7 "A" about the reports handed in on paper (what conclusions were drawn).

2. Studying new material (20 minutes).

You already know that to study various physical phenomena occurring with various physical bodies, you have to conduct experiments. And during experiments, it is necessary to measure various physical quantities, such as body mass, speed, time, height, length, width, etc. To measure physical quantities, various physical instruments are required.

2.1. What does it mean to measure a physical quantity?

(PZ): To measure a physical quantity means to compare it with a similar (as they say, homogeneous) physical quantity taken as the unit of measurement.

For example, the length of an object is compared to a unit of length, the mass of a body is compared to a unit of mass. But if one researcher measures the length, for example, of the distance traveled in fathoms, and another researcher measures it in feet, then it will probably be difficult for them to immediately understand each other.

Therefore, all over the world people try to measure physical quantities in the same units. In 1960, the International System of Units, SI (from the French Système International), was adopted, and it is in this system of units that we will work from now on.

For example, the most common physical quantities are length, mass and time. The International System of Units SI accepts:

Length is measured in meters (m); the unit of measurement is 1 m;

Mass is measured in kilograms (kg); the unit of measurement is 1 kg;

Time is measured in seconds (s); the unit of measurement is 1 s.

Of course, you know other, non-SI units as well. For example, time can be measured in minutes or hours. But keep in mind that we will try to carry out all our subsequent calculations in the SI system.

Units that are 10, 100, 1000, 1,000,000, etc. times larger than the accepted units (so-called multiples), as well as units that are 10, 100, 1000, etc. times smaller (submultiples), are often used.

For example: deca (da) = 10, hecto (h) = 100, kilo (k) = 1000, mega (M) = 1,000,000; deci (d) = 0.1, centi (c) = 0.01, milli (m) = 0.001.

Example: the length of a table is 95 cm. Express this length in meters (m).

95 cm = 95 × 0.01 m = 0.95 m
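Such prefix conversions are easy to automate. A minimal Python sketch (the prefix factors are the standard SI ones; the function name is illustrative):

```python
# SI prefix factors: multiples and submultiples of the base unit.
PREFIXES = {
    "da": 10, "h": 100, "k": 1_000, "M": 1_000_000,
    "d": 0.1, "c": 0.01, "m": 0.001,
}

def to_base_units(value, prefix):
    """Convert a prefixed value (e.g. 95 cm) to the base unit (m)."""
    return value * PREFIXES[prefix]

print(round(to_base_units(95, "c"), 2))  # 0.95: the table length in meters
```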

2.2. Scale division value of the measuring device

When taking measurements, it is very important to use measuring instruments correctly. You are already familiar with some instruments, such as a ruler and a thermometer. You have yet to get acquainted with others - the measuring cylinder, the voltmeter and the ammeter. But all these devices have one thing in common: they have a scale.

To work correctly with a measuring device, you must first pay attention to its measuring scale.

For example, consider the measuring scale of a very ordinary ruler.

Let's look at the ruler example in class together.

Using this ruler you can measure the length of any object, although not in SI units but in centimeters. The scale of any instrument must indicate its units of measurement.

On the scale you see strokes (this is the name given to the lines marked on the scale). The spaces between the strokes are called scale divisions. Don't confuse strokes with divisions!

There are numbers next to some of the strokes.

In order to start working with any device, it is necessary to determine the scale division value of this device.

(PZ): The scale division value of a measuring instrument is the difference between the values corresponding to the two nearest scale strokes, expressed in units of the measured quantity (centimeters or millimeters for a ruler, degrees for a thermometer, etc.).

To determine the scale division value of any measuring instrument, select the two nearest strokes labeled with numerical values, for example 2 and 1. Subtract the smaller value from the larger one, then divide the result by the number of divisions between the selected strokes.

In our example of a student ruler, the neighboring numbered strokes (say, 1 cm and 2 cm) have ten divisions between them, so the division value is (2 cm − 1 cm)/10 = 0.1 cm = 1 mm.

Another example is a thermometer scale.

Fig. 2. Thermometer scale

We select the two nearest strokes with numbers, for example 20 and 10 degrees Celsius (note that this scale also shows the units of measurement, °C). There are 2 divisions between the selected strokes. Thus, we get (20 °C − 10 °C)/2 = 5 °C per division.
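The rule for the division value can be stated as a one-line Python function (a sketch; the names are illustrative):

```python
def division_value(mark_low, mark_high, n_divisions):
    """Difference of two labeled scale marks divided by the
    number of divisions between them."""
    return (mark_high - mark_low) / n_divisions

# Thermometer example: marks 10 and 20 (degrees C), 2 divisions between them.
print(division_value(10, 20, 2))  # 5.0 degrees per division
```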

2.3. Measurement error and its determination.

To carry out measurements correctly, it is not enough to be able to determine the value of the instrument scale division. Remember that when talking about the distance from one point to another, we sometimes use expressions like “plus or minus half a kilometer.” This means that we do not know the exact distance, that in its measurement there was some inaccuracy, or, as they say, an error.

There is an error in any measurement; absolutely accurate instruments do not exist. And the magnitude of the error can also be determined by the scale of the measuring device.

(PZ): Measurement error is half the scale division of the measuring device.

Example 1. A regular student ruler has a division value of 1 mm. Suppose we used it to measure the thickness of a piece of chalk and got 12 mm. Half the ruler's division value is 0.5 mm; this is the measurement error. If we denote the thickness of the piece of chalk by the letter b, the measurement result is written as follows:

b = (12 ± 0.5) mm

The ± sign means that during the measurement we could have erred either up or down; that is, the thickness of the piece of chalk lies between 11.5 mm and 12.5 mm.

I draw example No. 2 on the board with a smaller number of divisions, together with the class we calculate the central value and find the error.

Fig. 1. Scale of a regular ruler

Division value = (2 cm − 1 cm)/5 = 0.2 cm = 2 mm

Half the ruler's division value in this case is 1 mm.

Then the thickness of the piece of chalk is b = (12 ± 1) mm; that is, in this case it lies between 11 mm and 13 mm. The scatter of the measurements turned out to be larger.

In both cases we measured correctly, but in the first case the measurement error was smaller and the accuracy higher than in the second, since the division value of the first ruler was smaller.

So from these two examples we can conclude:

(PZ): The smaller the scale division value of an instrument, the greater the accuracy (the smaller the error) of measurements made with it.

When recording values, taking into account the error, use the formula:

(PZ): A = a ± ∆a,

where A is the measured quantity, a is the measurement result, ∆a is the measurement error.
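This recording rule can be sketched in Python (assuming, as in the lesson, that the error is half the division value; the names are illustrative):

```python
def measurement_with_error(reading, division_value):
    """Return the result and its error, taken as half the division value."""
    error = division_value / 2
    return reading, error

# Chalk thickness read as 12 mm with a 1 mm ruler:
b, db = measurement_with_error(12, 1)
print(f"b = ({b} ± {db}) mm, i.e. from {b - db} to {b + db} mm")
```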

3. Consolidation of the studied material (10 minutes).

Textbook: Exercise No. 1.

4. Homework.

Textbook: § 4, 5.

Problem book: No. 17, No. 39. (detailed description of problems)

(explain how to write down detailed solutions to problems!!!)

When using certain measurements in practice, it is important to evaluate their accuracy. The term “measurement accuracy,” i.e., the degree of approximation of measurement results to a certain actual value, does not have a strict definition and is used for qualitative comparison of measurement operations. For quantitative assessment, the concept of “measurement error” is used (the smaller the error, the higher the accuracy).

Error is the deviation of a measurement result from the actual (true) value of the measured quantity. It should be borne in mind that the true value of a physical quantity is considered unknown and is used in theoretical studies. The actual value of a physical quantity is established experimentally under the assumption that the result of the experiment (measurement) is as close as possible to the true value. Assessing measurement error is one of the important measures to ensure measurement uniformity.

Measurement errors are usually given in the technical documentation for measuring instruments or in regulatory documents. However, since the error also depends on the conditions under which the measurement is carried out, on the experimental error of the technique, and on the subjective characteristics of the operator when a person is directly involved in the measurements, one can speak of several components of the measurement error, or of the total error.

The number of factors influencing the measurement accuracy is quite large, and any classification of measurement errors (Fig. 2) is to a certain extent arbitrary, since different errors, depending on the conditions of the measurement process, appear in different groups.

2.2 Types of errors

Measurement error is the deviation of the measurement result X from the true value Xt of the measured quantity. When determining measurement errors, the actual value Xd is used in place of the unknowable true value Xt of the physical quantity.

Depending on the form of the expression, absolute, relative and reduced measurement errors are distinguished.

The absolute error is defined as the difference Δ′ = X − Xt, or Δ = X − Xd, and the relative error as the ratio δ = ±Δ/Xd · 100%.

The reduced error is γ = ±Δ/XN · 100%, where XN is a normalizing value, for which the measuring range of the instrument, its upper measurement limit, etc. is used.
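The three forms of error can be computed side by side. A Python sketch (the numerical values are hypothetical, not taken from the text):

```python
def errors(x_measured, x_actual, x_norm):
    """Absolute, relative (%) and reduced (%) errors as defined above."""
    abs_err = x_measured - x_actual
    rel_err = abs_err / x_actual * 100   # relative to the actual value
    red_err = abs_err / x_norm * 100     # relative to the normalizing value
    return abs_err, rel_err, red_err

# Hypothetical pressure gauge: reading 10.1 MPa, actual value 10.0 MPa,
# normalizing value = 30 MPa measuring range.
a, r, g = errors(10.1, 10.0, 30)
print(round(a, 3), round(r, 2), round(g, 2))  # 0.1 1.0 0.33
```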

For repeated measurements of a parameter, the arithmetic mean is taken as the estimate of the true value:

X̄ = (1/n) Σ Xi,

where Xi is the result of the i-th measurement and n is the number of measurements.

The value X̄ obtained in one series of measurements is a random approximation to the true value. To assess its possible deviations, an estimate of the standard deviation of the arithmetic mean is determined:

S(X̄) = √( Σ (Xi − X̄)² / (n(n − 1)) )

To assess the scatter of the individual measurement results Xi about the arithmetic mean X̄, the sample standard deviation is determined:

σ = √( Σ (Xi − X̄)² / (n − 1) )

These formulas are used under the condition that the measured value remains constant during the measurement process.

These formulas correspond to the central limit theorem of probability theory, according to which the arithmetic mean of a series of measurements always has a smaller error than each individual measurement:

S(X̄) = σ/√n

This formula reflects the fundamental law of error theory. It follows that if the accuracy of the result (with systematic error excluded) must be increased by a factor of 2, the number of measurements must be increased by a factor of 4; to increase the accuracy by a factor of 3, the number of measurements must be increased by a factor of 9, and so on.

It is necessary to distinguish clearly between the use of S and σ: the former is used when assessing the error of the final result, the latter when assessing the error of the measurement method. The most probable error of an individual measurement is Δprob = 0.67S.
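The estimates X̄, σ, and S(X̄) follow directly from their definitions. A Python sketch with hypothetical repeated readings:

```python
import math

def mean_and_deviations(xs):
    """Arithmetic mean, sample standard deviation of individual results
    (sigma), and standard deviation of the mean, S = sigma / sqrt(n)."""
    n = len(xs)
    mean = sum(xs) / n
    sigma = math.sqrt(sum((x - mean) ** 2 for x in xs) / (n - 1))
    s_mean = sigma / math.sqrt(n)
    return mean, sigma, s_mean

xs = [9.8, 10.1, 10.0, 9.9, 10.2]   # hypothetical repeated readings
mean, sigma, s_mean = mean_and_deviations(xs)
print(round(mean, 2), round(sigma, 3), round(s_mean, 3))  # 10.0 0.158 0.071
```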

Depending on the nature of the manifestation, the causes of occurrence and the possibilities of elimination, systematic and random measurement errors, as well as gross errors (misses), are distinguished.

The systematic error remains constant or changes naturally with repeated measurements of the same parameter.

The random error changes randomly under the same measurement conditions.

Gross errors (misses) arise due to erroneous operator actions, malfunction of measuring instruments, or sudden changes in measurement conditions. As a rule, gross errors are identified as a result of processing measurement results using special criteria.

The random and systematic components of the measurement error appear simultaneously, so that their total error is equal to the sum of the errors when they are independent.

The value of the random error is unknown in advance; it arises due to many unspecified factors. Random errors cannot be excluded from the results, but their influence can be reduced by processing the measurement results.

For practical purposes, it is very important to be able to correctly formulate the requirements for measurement accuracy. For example, if we take Δ = 3σ as the permissible manufacturing error, then by increasing the accuracy requirements (for example, to Δ = σ), while maintaining the manufacturing technology, we increase the probability of defects.
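Assuming a normal error distribution, the defect probability for a tolerance of k·σ can be checked with the error function; a Python sketch:

```python
import math

def prob_outside(k):
    """P(|error| > k * sigma) for a normally distributed error."""
    return 1 - math.erf(k / math.sqrt(2))

# Tightening the tolerance from 3*sigma to 1*sigma sharply raises the
# expected fraction of rejects:
print(f"{prob_outside(3):.4f}")  # 0.0027
print(f"{prob_outside(1):.4f}")  # 0.3173
```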

It is generally believed that systematic errors can be detected and eliminated. However, in real conditions it is impossible to completely eliminate these errors. There are always some non-excluded residuals that need to be taken into account in order to estimate their boundaries. This will be the systematic measurement error.

In other words, in principle, the systematic error is also random and the indicated division is due only to the established traditions of processing and presenting measurement results.

Unlike random error, which is identified as a whole regardless of its sources, systematic error is considered in its components depending on the sources of its occurrence. There are subjective, methodological and instrumental components of error.

The subjective component of the error is associated with the individual characteristics of the operator. Typically, this error occurs due to reading errors (approximately 0.1 scale division) and incorrect operator skills. Basically, systematic error arises due to methodological and instrumental components.

The methodological component of the error is due to the imperfection of the measurement method, methods of using measuring instruments, incorrect calculation formulas and rounding of results.

The instrumental component arises due to the intrinsic error of the measuring instruments, determined by the accuracy class, the influence of the measuring instruments on the result and the limited resolution of the measuring instruments.

The expediency of dividing the systematic error into methodological and instrumental components is explained by the following:

To increase the accuracy of measurements, limiting factors can be identified, and, therefore, a decision can be made to improve the methodology or select more accurate measurement tools;

It becomes possible to determine the component of the total error that increases over time or under the influence of external factors, and, therefore, to purposefully carry out periodic verification and certification;

The instrumental component can be assessed before the development of the method, and the potential accuracy of the selected method will be determined only by the methodological component.

2.3 Measurement quality indicators

The uniformity of measurements, however, cannot be ensured only by the coincidence of errors. When carrying out measurements, it is also important to know the quality indicators of the measurements. The quality of measurements is understood as a set of properties that determine the receipt of results with the required accuracy characteristics, in the required form and on time.

The quality of measurements is characterized by such indicators as accuracy, correctness and reliability. These indicators should be determined by assessments, which are subject to the requirements of consistency, unbiasedness and efficiency.

The true value of the measured quantity differs from the arithmetic mean X̄ of the observation results by the systematic error Δc, i.e., X = X̄ − Δc. If the systematic component is excluded, then X = X̄.

However, because the number of observations is limited, the value X̄ itself cannot be determined exactly either. One can only estimate it and indicate, with a certain probability, the boundaries of the interval within which it lies. An estimate of a numerical characteristic of the distribution law, depicted by a point on the numerical axis, is called a point estimate. Unlike the numerical characteristics themselves, estimates are random variables whose values depend on the number of observations n. A consistent estimate is one that converges in probability to the quantity being estimated as n → ∞.

An unbiased estimate is one whose mathematical expectation is equal to the value being estimated.

An estimate that has the smallest variance is called efficient.

The listed requirements are satisfied by the arithmetic mean of the results of n observations.

Thus, the result of an individual measurement is a random variable. Measurement accuracy is then the closeness of the measurement results to the true value of the measured quantity. If the systematic error components are excluded, the accuracy of the measurement result X̄ is characterized by the degree of dispersion of its values, i.e., by its variance. As shown above, the variance of the arithmetic mean is n times smaller than the variance of an individual observation result.

Figure 3 shows the distribution densities of the individual and the averaged measurement results; the narrower shaded area refers to the probability density of the mean value. The correctness (trueness) of measurements is determined by the closeness of the systematic error to zero.

The reliability of measurements is determined by the degree of confidence in the result and is characterized by the probability that the true value of the measured quantity lies within the indicated vicinity of the actual value. These probabilities are called confidence probabilities, and the boundaries (vicinities) are called confidence limits. In other words, the reliability of a measurement is the closeness to zero of the non-excluded systematic error.

A confidence interval with boundaries (confidence limits) −Δd to +Δd about the mean is the interval of random-error values that, with a given confidence probability Pd, covers the true value of the measured quantity:

P(X̄ − Δd ≤ X ≤ X̄ + Δd) = Pd.

With a small number of measurements (n < 20), the normal law cannot be used to determine the confidence interval, since the normal distribution law describes the behavior of a random error, strictly speaking, only for an infinitely large number of measurements.

Therefore, for a small number of measurements, the Student distribution (t-distribution) is used; it was proposed by the English statistician W. Gosset, who published under the pseudonym "Student", and it makes it possible to determine confidence intervals for a limited number of measurements. The boundaries of the confidence interval are determined by the formula:

Δd = t · S(X̄),

where t is the Student distribution coefficient, depending on the specified confidence probability P d and the number of measurements n.

As the number of observations n increases, the Student distribution quickly approaches normal and coincides with it already for n ≥30.
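For a small series, the half-width Δd = t·S(X̄) can be computed with tabulated Student coefficients. A Python sketch (the t values below are standard two-sided table entries for P = 0.95; the readings are hypothetical):

```python
import math

# Two-sided Student coefficients for confidence probability P = 0.95,
# keyed by degrees of freedom n - 1 (standard table values).
T_95 = {2: 4.303, 3: 3.182, 4: 2.776, 5: 2.571, 9: 2.262, 29: 2.045}

def confidence_half_width(xs, t_table=T_95):
    n = len(xs)
    mean = sum(xs) / n
    sigma = math.sqrt(sum((x - mean) ** 2 for x in xs) / (n - 1))
    s_mean = sigma / math.sqrt(n)          # S(mean) = sigma / sqrt(n)
    return mean, t_table[n - 1] * s_mean   # result: mean ± half-width

xs = [9.8, 10.1, 10.0, 9.9, 10.2]
mean, delta = confidence_half_width(xs)
print(f"X = {mean:.2f} ± {delta:.2f}")  # X = 10.00 ± 0.20
```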

It should be noted that measurement results that do not have reliability, that is, a degree of confidence in their correctness, are of no value. For example, a sensor of a measuring circuit may have very high metrological characteristics, but the influence of errors from its installation, external conditions, methods of recording and signal processing will lead to a large final measurement error.

Along with such indicators as accuracy, reliability and correctness, the quality of measurement operations is also characterized by the convergence and reproducibility of results. These indicators are most common when assessing the quality of tests and characterize their accuracy.

Obviously, two tests of the same object using the same method do not give identical results. Their objective measure can be statistically based estimates of the expected similarity of the results of two or more tests obtained with strict adherence to their methodology. Convergence and reproducibility are taken as such statistical assessments of the consistency of test results.

Convergence (repeatability) is the closeness of the results of two tests obtained by the same method, on identical installations, in the same laboratory. Reproducibility differs from convergence in that the two results must be obtained in different laboratories.

Measurement accuracy is the degree of approximation of measurement results to some actual value of a physical quantity. The lower the accuracy, the greater the measurement error and, accordingly, the smaller the error, the higher the accuracy.

Even the most accurate instruments cannot show the actual value of the measured value. There is definitely a measurement error, which can be caused by various factors.

Errors may be:

systematic: for example, if a strain gauge is poorly glued to the elastic element, the deformation of its grid will not match the deformation of the elastic element, and the sensor will consistently read incorrectly;

random: caused, for example, by improper functioning of the mechanical or electrical elements of the measuring device;

gross: as a rule, these are committed by the operator, who, through inexperience or fatigue, misreads the instrument or makes mistakes when processing the information; they can also be caused by a malfunction of measuring instruments or a sudden change in measurement conditions.

It is almost impossible to eliminate errors completely, but it is necessary to establish the limits of possible measurement errors and, hence, the accuracy with which measurements are performed.

Classification and metrological characteristics of measuring instruments

Measuring instruments approved by Gosstandart of Russia are registered in the state Register of Measuring Instruments, certified by certificates of conformity, and only after that are allowed for use on the territory of the Russian Federation.

Reference publications adopt the following structure for describing measuring instruments: registration number, name, number and validity period of the certificate of approval of the type of measuring instrument, location of the manufacturer and basic metrological characteristics. The latter evaluate the suitability of measuring instruments for measurements in a known range with a known accuracy.

Metrological characteristics of measuring instruments provide:

Possibility of establishing measurement accuracy;

Achieving interchangeability and comparing measuring instruments with each other;

Selection of the necessary measuring instruments for accuracy and other characteristics;

Determination of errors of measuring systems and installations;

Assessment of the technical condition of measuring instruments during their verification.

The metrological characteristics established by the documents are considered valid. In practice, the following metrological characteristics of measuring instruments are most common:

measuring range: the range of values of the measured quantity for which the permissible limits of error of the measuring instrument are normalized;

measurement limit: the largest or smallest value of the measuring range (for measures, the nominal value of the reproduced quantity);

instrument scale: a graduated set of marks and numbers on the reading device of a measuring instrument corresponding to a series of successive values of the measured quantity;

scale division value: the difference between the values of the quantity corresponding to two adjacent scale marks. Instruments with a uniform scale have a constant division value; those with a non-uniform scale have a variable one, in which case the minimum division value is normalized.

The main standardized metrological characteristic of measuring instruments is error, i.e., the difference between the readings of measuring instruments and the true (actual) values ​​of physical quantities.

All errors depending on external conditions are divided into basic and additional.

The main error is the error under normal operating conditions.

In practice, where a wider range of influencing quantities is present, an additional error of the measuring instrument is also normalized.

The limit of permissible error is the largest error, caused by a change in an influencing quantity, at which the measuring instrument is still approved for use according to the technical requirements.

The accuracy class is a generalized metrological characteristic that covers various properties of a measuring instrument. For example, for indicating electrical measuring instruments the accuracy class includes, in addition to the main error, the variation of readings; for measures of electrical quantities it includes the degree of instability (the percentage change in the value of the measure over a year).

The accuracy class of a measuring instrument already includes systematic and random errors. However, it is not a direct characteristic of the accuracy of measurements performed using these measuring instruments, since the measurement accuracy also depends on the measurement technique, the interaction of the measuring instrument with the object, measurement conditions, etc.

Error is the deviation of the result of measuring a physical quantity (for example, pressure) from the true value of the measured quantity. Error arises from the imperfection of the measurement method or of the measuring instruments, insufficient account of the influence of external conditions on the measurement process, the specific nature of the measured quantities themselves, and other factors.

The accuracy of the measurements is characterized by the closeness of their results to the true value of the measured quantities. There is a concept of absolute and relative measurement error.

The absolute measurement error is the difference between the measurement result and the actual value of the measured quantity:

ΔX = Q − X, (6.16)

where Q is the measurement result and X is the actual value of the measured quantity.

The absolute error is expressed in units of the measured quantity (kgf/cm², etc.).

The relative measurement error characterizes the quality of the measurement results and is defined as the ratio of the absolute error DX to the actual value of the quantity:

δX = ΔX/X, (6.17)

Relative error is usually expressed as a percentage.

Depending on the causes leading to measurement error, systematic and random errors are distinguished.

Systematic measurement errors include errors that, during repeated measurements under the same conditions, manifest themselves in the same way, i.e., they remain constant or their values ​​change according to a certain law. Such measurement errors are determined quite accurately.

Random errors are errors whose values change in a random manner during repeated measurements of a physical quantity performed in the same way.

The error of instruments is assessed as a result of their verification, i.e., a set of actions (measures) aimed at comparing instrument readings with the actual value of the measured value. When checking working instruments, the actual value of the measured quantity is taken to be the value of standard measures or readings of standard instruments. When assessing the error of standard measuring instruments, the value of the standard measures or the readings of the standard instruments are taken as the actual value of the measurement of the quantity.

The main error is the error inherent in the measuring instrument under normal conditions (normal atmospheric pressure, air temperature 20 °C, humidity 50-80%).

Additional error is an error caused by one of the influencing quantities going beyond its normal range (for example, the temperature of the measured medium).

The concept of accuracy classes. The accuracy class is a generalized characteristic of measuring instruments, defined by the limits of permissible basic and additional errors, as well as other properties of these instruments that may affect their accuracy. The accuracy class is expressed by a number that coincides with the value of the permissible error.

A standard pressure gauge (sensor) of accuracy class 0.4 has a permissible error of 0.4% of the measurement limit; i.e., the error of a standard pressure gauge with a measurement limit of 30 MPa must not exceed ±0.12 MPa.

Accuracy classes of pressure measuring instruments: 0.16; 0.25; 0.4; 0.6; 1.0; 1.5; 2.5.
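The permissible absolute error implied by an accuracy class follows directly from the class number. A Python sketch reproducing the pressure-gauge example from the text (the function name is illustrative):

```python
def permissible_error(accuracy_class, measurement_limit):
    """Largest permissible absolute error: the class (in %) of the limit."""
    return accuracy_class / 100 * measurement_limit

# Class 0.4 gauge with a 30 MPa measurement limit:
print(round(permissible_error(0.4, 30), 2))  # 0.12 (MPa)
```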

The sensitivity of an instrument is the ratio of the displacement Δn of its pointer to the change in the measured quantity that caused this displacement. As a rule, the higher the accuracy of the instrument, the greater its sensitivity.

The main characteristics of measuring instruments are determined in the process of special tests, including calibration, during which the calibration characteristic of the device is determined, i.e. the relationship between its readings and the values ​​of the measured quantity. The calibration characteristic is compiled in the form of graphs, formulas or tables.