3. Approximation of functions by the least squares method
The least squares method is used when processing experimental results in order to approximate (fit) the experimental data by an analytical formula. The specific form of the formula is chosen, as a rule, from physical considerations; typical choices are linear, quadratic and other simple dependencies.
The essence of the least squares method is as follows. Let the measurement results be presented in a table:

Table 4

| x | x_1 | x_2 | … | x_n |
| y | y_1 | y_2 | … | y_n |

The approximating function is sought in the form

y = f(x; a_0, a_1, …, a_m),    (3.1)

where f is a known function and a_0, a_1, …, a_m are unknown constant parameters whose values must be found. In the least squares method, the approximation of function (3.1) to the experimental dependence is considered best if the condition
Q = Q(a_0, a_1, …, a_m) = Σ_{i=1}^{n} [ y_i − f(x_i; a_0, a_1, …, a_m) ]² → min    (3.2)

is satisfied, that is, if the sum of the squared deviations of the desired analytical function from the experimental dependence is minimal. The function Q is called the residual (discrepancy).
Since the residual Q ≥ 0 and is bounded below, it has a minimum. A necessary condition for the minimum of a function of several variables is that all partial derivatives of this function with respect to the parameters vanish. Thus, finding the best values of the parameters of the approximating function (3.1), i.e. the values for which Q = Q(a_0, a_1, …, a_m) is minimal, reduces to solving the system of equations

∂Q/∂a_0 = 0,  ∂Q/∂a_1 = 0,  …,  ∂Q/∂a_m = 0.    (3.3)
The method of least squares can be given the following geometric interpretation: among an infinite family of lines of a given type, one line is found for which the sum of the squared differences in the ordinates of the experimental points and the corresponding ordinates of the points found by the equation of this line will be the smallest.
Finding the parameters of a linear function
Let the experimental data be approximated by a linear function:

y = ax + b.

It is required to choose values of a and b for which the function

Q(a, b) = Σ_{i=1}^{n} (a x_i + b − y_i)²    (3.4)
is minimal. The necessary conditions for a minimum of function (3.4) reduce to the system of equations

∂Q/∂a = 2 Σ_{i=1}^{n} (a x_i + b − y_i) x_i = 0,
∂Q/∂b = 2 Σ_{i=1}^{n} (a x_i + b − y_i) = 0.
After transformations, we obtain a system of two linear equations with two unknowns:

a Σ_{i=1}^{n} x_i² + b Σ_{i=1}^{n} x_i = Σ_{i=1}^{n} x_i y_i,
a Σ_{i=1}^{n} x_i + b·n = Σ_{i=1}^{n} y_i,    (3.5)
solving which we find the desired values of the parameters a and b.
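System (3.5) is small enough to solve directly. The following Python sketch (an illustration, not part of the original text; the function name is ours) forms the sums and applies Cramer's rule:

```python
def fit_linear(xs, ys):
    """Fit y = a*x + b by solving the normal equations (3.5)."""
    n = len(xs)
    sx = sum(xs)
    sy = sum(ys)
    sxx = sum(x * x for x in xs)
    sxy = sum(x * y for x, y in zip(xs, ys))
    # System (3.5):  a*sxx + b*sx = sxy
    #                a*sx  + b*n  = sy
    d = sxx * n - sx * sx          # determinant (Cramer's rule)
    a = (sxy * n - sx * sy) / d
    b = (sxx * sy - sx * sxy) / d
    return a, b
```

For points lying exactly on the line y = 2x + 1, the sketch recovers a = 2 and b = 1.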
Finding the parameters of a quadratic function
If the approximating function is a quadratic dependence

y = a x² + b x + c,

then its parameters a, b, c are found from the condition of a minimum of the function

Q(a, b, c) = Σ_{i=1}^{n} (a x_i² + b x_i + c − y_i)².    (3.6)
The minimum conditions for function (3.6) reduce to the system of equations

∂Q/∂a = 2 Σ_{i=1}^{n} (a x_i² + b x_i + c − y_i) x_i² = 0,
∂Q/∂b = 2 Σ_{i=1}^{n} (a x_i² + b x_i + c − y_i) x_i = 0,
∂Q/∂c = 2 Σ_{i=1}^{n} (a x_i² + b x_i + c − y_i) = 0.
After transformations, we obtain a system of three linear equations with three unknowns:

a Σ x_i⁴ + b Σ x_i³ + c Σ x_i² = Σ x_i² y_i,
a Σ x_i³ + b Σ x_i² + c Σ x_i = Σ x_i y_i,
a Σ x_i² + b Σ x_i + c·n = Σ y_i,    (3.7)
solving which we find the desired values of the parameters a, b and c.
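System (3.7) can be assembled and solved mechanically; here is a hedged NumPy sketch (illustrative, not from the original text) that builds the matrix of power sums and solves it:

```python
import numpy as np

def fit_quadratic(xs, ys):
    """Fit y = a*x^2 + b*x + c by solving the normal system (3.7)."""
    x = np.asarray(xs, dtype=float)
    y = np.asarray(ys, dtype=float)
    s = lambda k: float((x ** k).sum())   # power sums of x
    A = np.array([[s(4), s(3), s(2)],
                  [s(3), s(2), s(1)],
                  [s(2), s(1), float(len(x))]])
    rhs = np.array([float((x ** 2 * y).sum()),
                    float((x * y).sum()),
                    float(y.sum())])
    return np.linalg.solve(A, rhs)        # returns [a, b, c]
```

For points lying exactly on y = x², the sketch recovers a = 1, b = 0, c = 0.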
Example. Let the following table of values of x and y be obtained as a result of the experiment:

Table 5

| x_i | 0.5 | 0.6 | 0.7 | 0.8 | 0.9 | 1.0 | 1.1 | 1.2 |
| y_i | 0.705 | 0.495 | 0.426 | 0.357 | 0.368 | 0.406 | 0.549 | 0.768 |
It is required to approximate the experimental data by linear and quadratic functions.
Solution. Finding the parameters of the approximating functions reduces to solving the systems of linear equations (3.5) and (3.7). To solve the problem, we use the Excel spreadsheet processor.
1. First we link Sheets 1 and 2. Enter the experimental values x_i and y_i into columns A and B, starting from the second row (the first row holds the column headings). Then we calculate the sums of these columns and place them in the tenth row. In columns C–G we place the calculation and summation of x_i², x_i³, x_i⁴, x_i·y_i and x_i²·y_i, respectively.
2. Unlink the sheets. Further calculations are carried out similarly: for the linear dependence on Sheet 1 and for the quadratic dependence on Sheet 2.
3. Under the resulting table, we form the matrix of coefficients and the column vector of free terms. We solve the system of linear equations by computing the inverse of the coefficient matrix and multiplying it by the vector of free terms. For the matrix inverse and the matrix product we use the Function Wizard and the functions MOBR (MINVERSE) and MUMNOZH (MMULT).
4. In the cell block H2:H9, based on the obtained coefficients, we calculate the values of the approximating polynomial y_i,calc; in block I2:I9, the deviations Δy_i = y_i,exp − y_i,calc; and in column J, the residual Q = Σ (Δy_i)².
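In Python, the same inverse-and-multiply step (the spreadsheet's MOBR and MUMNOZH) looks as follows. The numeric matrix is a sketch built from the sums of this example under our reading of the x values, so treat the numbers as illustrative:

```python
import numpy as np

# Coefficient matrix [[sum(x^2), sum(x)], [sum(x), n]] and
# free-term vector [sum(x*y), sum(y)] for the linear system (3.5),
# computed from the example data (illustrative values).
A = np.array([[6.20, 6.8],
              [6.8,  8.0]])
b = np.array([3.496, 4.074])

p = np.linalg.inv(A) @ b   # MOBR (inverse) then MUMNOZH (product)
```

The resulting p holds the parameters [a, b] of the linear fit.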
The resulting tables and the graphs built with the Chart Wizard are shown in Figures 6, 7 and 8.

Fig. 6. Table for calculating the coefficients of a linear function approximating the experimental data.

Fig. 7. Table for calculating the coefficients of a quadratic function approximating the experimental data.

Fig. 8. Graphical representation of the results of approximating the experimental data by linear and quadratic functions.
Answer. The experimental data were approximated by the linear dependence y = 0.07881x + 0.442262 with residual Q = 0.165167 and by the quadratic dependence y = 3.115476x² − 5.2175x + 2.529631 with residual Q = 0.002103.
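The answer can be checked with a short NumPy script. The x_i values are not fully legible in our copy of Table 5; the grid 0.5, 0.6, …, 1.2 used below was reconstructed so that the fits reproduce the stated answer, so treat it as an assumption:

```python
import numpy as np

x = np.linspace(0.5, 1.2, 8)   # reconstructed x_i (assumption)
y = np.array([0.705, 0.495, 0.426, 0.357,
              0.368, 0.406, 0.549, 0.768])

lin = np.polyfit(x, y, 1)      # [a, b] for y = a*x + b
quad = np.polyfit(x, y, 2)     # [a, b, c] for y = a*x^2 + b*x + c
Q_lin = float(((np.polyval(lin, x) - y) ** 2).sum())
Q_quad = float(((np.polyval(quad, x) - y) ** 2).sum())
```

With these x_i, the fitted coefficients and both residuals agree with the answer above.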
Tasks. Approximate the function given in tabular form by linear and quadratic functions.
Table 6

| x | 0.1 | 0.2 | 0.3 | 0.4 | 0.5 | 0.6 | 0.7 | 0.8 |
| №0, y | 3.030 | 3.142 | 3.358 | 3.463 | 3.772 | 3.251 | 3.170 | 3.665 |
| №1, y | 3.314 | 3.278 | 3.262 | 3.292 | 3.332 | 3.397 | 3.487 | 3.563 |
| №2, y | 1.045 | 1.162 | 1.264 | 1.172 | 1.070 | 0.898 | 0.656 | 0.344 |
| №3, y | 6.715 | 6.735 | 6.750 | 6.741 | 6.645 | 6.639 | 6.647 | 6.612 |
| №4, y | 2.325 | 2.515 | 2.638 | 2.700 | 2.696 | 2.626 | 2.491 | 2.291 |
| №5, y | 1.752 | 1.762 | 1.777 | 1.797 | 1.821 | 1.850 | 1.884 | 1.944 |
| №6, y | 1.924 | 1.710 | 1.525 | 1.370 | 1.264 | 1.190 | 1.148 | 1.127 |
| №7, y | 1.025 | 1.144 | 1.336 | 1.419 | 1.479 | 1.530 | 1.568 | 1.248 |
| №8, y | 5.785 | 5.685 | 5.605 | 5.545 | 5.505 | 5.480 | 5.495 | 5.510 |
| №9, y | 4.052 | 4.092 | 4.152 | 4.234 | 4.338 | 4.468 | 4.599 | … |
We approximate the function by a polynomial of the 2nd degree. To do this, we calculate the coefficients of the normal system of equations and compose the normal least squares system. Solving the system yields the coefficients, and thus the 2nd-degree polynomial is found.

Example 2. Finding the optimal degree of a polynomial.

Example 3. Derivation of the normal system of equations for finding the parameters of an empirical dependence. Let us derive the system of equations for determining the coefficients of a function that performs the root-mean-square approximation of a given function over a set of points. We compose the sum of squared deviations and write the necessary extremum condition for it; the normal system then takes the form of a linear system of equations for the unknown parameters, which is easily solved.

Example. Experimental data on the values of the variables x and y are given in a table. Using the least squares method, approximate these data by a linear dependence y = ax + b (find the parameters a and b). Find out which of two lines better (in the sense of the least squares method) fits the experimental data. Make a drawing.

The essence of the least squares method (LSM). The problem is to find the coefficients of the linear dependence for which the function of the two variables a and b takes the smallest value, that is, for which the sum of the squared deviations of the experimental data from the found straight line is smallest. This is the whole point of the least squares method. Thus, the solution of the example reduces to finding the extremum of a function of two variables.
Derivation of formulas for finding the coefficients. A system of two equations with two unknowns is compiled and solved. Finding the partial derivatives of the function with respect to the variables a and b, we equate these derivatives to zero. We solve the resulting system of equations by any method (for example, by substitution or by Cramer's rule) and obtain formulas for finding the coefficients by the least squares method (LSM). With these values of a and b, the function takes the smallest value; the proof of this fact is given below at the end of the section. That is the whole least squares method. The formula for the parameter a contains the sums Σx_i, Σy_i, Σx_i y_i and Σx_i², and the parameter n, the number of experimental points. These sums are recommended to be calculated separately. The coefficient b is found after calculating a.

It is time to return to the original example. Solution. In our example n = 5. We fill in the table for the convenience of calculating the sums that appear in the formulas for the required coefficients. The values in the fourth row of the table are obtained by multiplying the values of the 2nd row by the values of the 3rd row for each number i. The values in the fifth row are obtained by squaring the values of the 2nd row for each i. The values in the last column are the sums of the values across the rows. We use the least squares formulas to find the coefficients a and b, substituting the corresponding values from the last column of the table. Consequently, y = 0.165x + 2.184 is the desired approximating straight line. It remains to find out which of the candidate lines better approximates the original data, i.e. to make an estimate using the least squares method.
Estimation of the error of the least squares method. To do this, we calculate the sums of the squared deviations of the original data from each line; the smaller value corresponds to the line that better approximates the original data in the sense of the least squares method. Since the residual of y = 0.165x + 2.184 is smaller, this line approximates the original data better.

Graphic illustration of the least squares method (LSM). Everything is clearly visible on the charts: the red line is the found line y = 0.165x + 2.184, the blue line is the alternative, and the pink dots are the original data. What are all these approximations for? They are used to solve data smoothing problems and interpolation and extrapolation problems (in the original example, one could be asked to find the value of the observed quantity y at x = 3 or at x = 6 by the least squares method). But we will talk more about this in another section.

Proof. For the found a and b, the function takes the smallest value if at this point the matrix of the quadratic form of the second-order differential of the function is positive definite. Let us show this. The second-order differential determines the matrix of the quadratic form; to show that this matrix is positive definite, its angular minors must be positive. The first-order angular minor is positive, with strict inequality since the points do not coincide (this is assumed in what follows). The positivity of the second-order angular minor is proved by mathematical induction. Conclusion: the found values of a and b correspond to the smallest value of the function and are therefore the desired parameters of the least squares method.

Development of a forecast using the least squares method.
Example of a problem solution. Extrapolation is a method of scientific research based on extending past and present trends, patterns and relationships to the future development of the object of forecasting. Extrapolation methods include the moving average method, the exponential smoothing method and the least squares method.

The essence of the least squares method consists in minimizing the sum of squared deviations between the observed and calculated values. The calculated values are found from the selected equation, the regression equation. The smaller the distance between the actual values and the calculated ones, the more accurate the forecast based on the regression equation.

A theoretical analysis of the essence of the phenomenon under study, whose change is displayed by the time series, serves as the basis for choosing the curve. Considerations about the nature of the growth of the levels of the series are also taken into account. Thus, if output is expected to grow in an arithmetic progression, smoothing is performed with a straight line; if the growth turns out to be exponential, smoothing should be done with an exponential function.

The working formula of the least squares method is Y_{t+1} = a·X + b, where t + 1 is the forecast period, Y_{t+1} is the predicted indicator, a and b are coefficients, and X is the time index. The coefficients a and b are calculated from formulas involving Y_f, the actual values of the time series, and n, the number of levels in the time series. Smoothing of time series by the least squares method serves to reflect the patterns of development of the phenomenon under study. In the analytic expression of the trend, time is treated as the independent variable, and the levels of the series act as a function of this independent variable.
The development of a phenomenon depends not on how many years have passed since the starting point, but on which factors influenced its development, in what direction and with what intensity. Hence the development of a phenomenon in time appears as the result of the action of these factors. Correctly determining the type of curve, i.e. the type of analytical dependence on time, is one of the most difficult tasks of pre-forecast analysis. The selection of the type of function describing the trend, whose parameters are determined by the least squares method, is in most cases done empirically, by constructing a number of functions and comparing them by the value of the root-mean-square error, calculated from a formula involving Y_f, the actual values of the series; Y_r, the calculated (smoothed) values of the series; n, the number of levels in the time series; and p, the number of parameters in the formulas describing the trend. Disadvantages of the least squares method:
An example of using the least squares method to develop a forecast. Task. There are data characterizing the level of unemployment in the region, %.
Least squares solution. For the solution, we compile a table in which we make the necessary calculations. The average relative error ε = 28.63/10 = 2.86%, so the forecast accuracy is high.

Conclusion. Comparing the results obtained by the moving average method, exponential smoothing and the least squares method, we can say that the average relative error of the exponential smoothing method falls within 20–50%, which means the prediction accuracy in that case is only satisfactory. In the first and third cases, the forecast accuracy is high, since the average relative error is less than 10%. But the moving average method gave the most reliable results (forecast for November 1.52%, for December 1.53%, for January 1.49%), since the average relative error with this method is the smallest, 1.13%.
The LSM online program (approximation by y = a + bx or y = kx; i is the number of the experimental point).
User manual for the LSM online program. In the data field, enter on each separate line the values of `x` and `y` at one experimental point. Values must be separated by whitespace (space or tab). The third value can be the weight of the point, `w`. If the point weight is not specified, it is equal to one. In the overwhelming majority of cases, the weights of the experimental points are unknown or not calculated, and all experimental data are considered equivalent. Sometimes the weights in the studied range of values are definitely not equivalent and can even be calculated theoretically; for example, in spectrophotometry weights can be calculated by simple formulas, although this is usually neglected to reduce labor costs.

Data can be pasted through the clipboard from an office-suite spreadsheet, such as Excel from Microsoft Office or Calc from OpenOffice. To do this, select the data range in the spreadsheet, copy it to the clipboard, and paste the data into the data field on this page.

The least squares method requires at least two points to determine the two coefficients: `b`, the tangent of the angle of inclination of the line, and `a`, the value cut off by the line on the `y` axis. To estimate the error of the calculated regression coefficients, the number of experimental points must be greater than two.

Least squares method (LSM). The greater the number of experimental points, the more accurate the statistical estimate of the coefficients (due to the decrease in the Student coefficient) and the closer the estimate to the estimate of the general population. Obtaining values at each experimental point is often associated with significant labor costs, so a compromise number of experiments is usually carried out, which gives a usable estimate without excessive labor costs.
As a rule, the number of experimental points for a linear least squares dependence with two coefficients is chosen in the region of 5–7 points.

A brief theory of least squares for a linear dependence. Suppose we have a set of experimental data in the form of pairs of values [`y_i`, `x_i`], where `i` is the number of one experimental measurement from 1 to `n`, `y_i` is the value of the measured quantity at point `i`, and `x_i` is the value of the parameter we set at point `i`. An example is Ohm's law: by changing the voltage (potential difference) between sections of an electrical circuit, we measure the current passing through that section. Physics gives us the experimentally found dependence `I = U/R`; in this case `y_i` is the measured current and `x_i` the voltage. As another example, consider the absorption of light by a solution of a substance. Chemistry gives us the formula `A = εlC`; in this case `y_i` is the measured optical density `A`, and `x_i` is the concentration of the substance that we set.

We consider the case when the relative error in setting `x_i` is much less than the relative error in measuring `y_i`. We also assume that all measured values `y_i` are random and normally distributed, i.e. obey the normal distribution law. In the case of a linear dependence of `y` on `x`, we can write the theoretical dependence `y = a + b x`. From the geometric point of view, the coefficient `b` denotes the tangent of the angle of inclination of the line to the `x` axis, and the coefficient `a` the value of `y` at the point of intersection of the line with the `y` axis (at `x = 0`).

Finding the parameters of the regression line. In an experiment, the measured values `y_i` cannot lie exactly on the theoretical line because of measurement errors, which are always inherent in real life. Therefore, the linear equation must be represented by a system of equations `y_i = a + b x_i + ε_i` (1). Dependence (1) is also called a regression, i.e.
the dependence of two quantities on each other with statistical significance. The task of restoring the dependence is to find the coefficients `a` and `b` from the experimental points [`y_i`, `x_i`]. To find the coefficients `a` and `b`, the least squares method (LSM) is usually used; it is a special case of the maximum likelihood principle.

Let us rewrite (1) as `ε_i = y_i - a - b x_i`. Then the sum of squared errors is

`S = sum_(i=1)^(n) ε_i^2 = sum_(i=1)^(n) (y_i - a - b x_i)^2`  (2)

The principle of the least squares method is to minimize the sum (2) with respect to the parameters `a` and `b`. The minimum is reached when the partial derivatives of the sum (2) with respect to `a` and `b` are equal to zero. Expanding the derivatives, we obtain a system of two equations with two unknowns; opening the brackets and moving the sums independent of the desired coefficients to the other side, we get a system of linear equations. Solving the resulting system, we find formulas for the coefficients `a` and `b`:

`a = frac(sum_(i=1)^(n) y_i sum_(i=1)^(n) x_i^2 - sum_(i=1)^(n) x_i sum_(i=1)^(n) x_i y_i)(n sum_(i=1)^(n) x_i^2 - (sum_(i=1)^(n) x_i)^2)`  (3.1)

`b = frac(n sum_(i=1)^(n) x_i y_i - sum_(i=1)^(n) x_i sum_(i=1)^(n) y_i)(n sum_(i=1)^(n) x_i^2 - (sum_(i=1)^(n) x_i)^2)`  (3.2)

These formulas have solutions when `n > 1` (the line can be drawn through at least 2 points) and when the determinant `D = n sum_(i=1)^(n) x_i^2 - (sum_(i=1)^(n) x_i)^2 != 0`, i.e. when the points `x_i` in the experiment are different (i.e. when the line is not vertical).

Estimation of errors in the coefficients of the regression line. For a more accurate estimate of the error in calculating the coefficients `a` and `b`, a large number of experimental points is desirable. When `n = 2` it is impossible to estimate the error of the coefficients, because the approximating line passes uniquely through two points.
The error of a random variable `V` is determined by the error accumulation law. Writing the law of accumulation of errors for the errors of the coefficients `a` and `b`, with `S_y^2 = S_(y_i)^2` the error (variance, squared standard deviation) in the `y` measurement, assumed uniform for all `y` values, and substituting the formulas for `a` and `b` into the resulting expressions, we get

`S_a^2 = S_y^2 frac(sum_(i=1)^(n) (sum_(i=1)^(n) x_i^2 - x_i sum_(i=1)^(n) x_i)^2)(D^2) = S_y^2 frac((n sum_(i=1)^(n) x_i^2 - (sum_(i=1)^(n) x_i)^2) sum_(i=1)^(n) x_i^2)(D^2) = S_y^2 frac(sum_(i=1)^(n) x_i^2)(D)`  (4.1)

`S_b^2 = S_y^2 frac(sum_(i=1)^(n) (n x_i - sum_(i=1)^(n) x_i)^2)(D^2) = S_y^2 frac(n (n sum_(i=1)^(n) x_i^2 - (sum_(i=1)^(n) x_i)^2))(D^2) = S_y^2 frac(n)(D)`  (4.2)

In most real experiments the value of `S_y` is not measured; to measure it, several parallel measurements (experiments) would have to be carried out at one or several points of the plan, which increases the time (and possibly the cost) of the experiment. Therefore it is usually assumed that the deviation of `y` from the regression line can be considered random, and the variance estimate of `y` is calculated by the formula

`S_y^2 = S_(y,rest)^2 = frac(sum_(i=1)^n (y_i - a - b x_i)^2)(n-2)`.

The divisor `n-2` appears because the number of degrees of freedom has been reduced by the calculation of two coefficients from the same sample of experimental data. This estimate is also called the residual variance relative to the regression line, `S_(y,rest)^2`.

The significance of the coefficients is assessed by the Student criterion: `t_a = frac(|a|)(S_a)`, `t_b = frac(|b|)(S_b)`. If the calculated criteria `t_a`, `t_b` are less than the tabulated criterion `t(P, n-2)`, then the corresponding coefficient is considered not significantly different from zero with the given probability `P`.
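Formulas (4.1), (4.2) and the residual-variance estimate can be collected into one Python sketch (the function name and the sample data are ours, for illustration only):

```python
import numpy as np

def line_fit_with_errors(x, y):
    """y = a + b*x with standard errors S_a, S_b (formulas (4.1), (4.2))."""
    x = np.asarray(x, dtype=float)
    y = np.asarray(y, dtype=float)
    n = len(x)
    D = n * (x ** 2).sum() - x.sum() ** 2
    a = (y.sum() * (x ** 2).sum() - x.sum() * (x * y).sum()) / D
    b = (n * (x * y).sum() - x.sum() * y.sum()) / D
    # Residual variance relative to the regression line, n-2 dof.
    s_y2 = ((y - a - b * x) ** 2).sum() / (n - 2)
    s_a = np.sqrt(s_y2 * (x ** 2).sum() / D)   # (4.1)
    s_b = np.sqrt(s_y2 * n / D)                # (4.2)
    return a, b, s_a, s_b
```

With the errors in hand, the Student criteria t_a = |a|/S_a and t_b = |b|/S_b follow directly.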
To assess the quality of the description of a linear relationship, one can compare `S_(y,rest)^2` and `S_(bar y)` relative to the mean using the Fisher criterion:

`S_(bar y) = frac(sum_(i=1)^n (y_i - bar y)^2)(n-1) = frac(sum_(i=1)^n (y_i - (sum_(i=1)^n y_i)/n)^2)(n-1)`, the sample estimate of the variance of `y` relative to the mean.

To evaluate the effectiveness of the regression equation in describing the dependence, the Fisher coefficient is calculated. If `F > F(P, n-1, n-2)`, the difference between the description of the dependence `y = f(x)` by the regression equation and the description by the mean is considered statistically significant with probability `P`, i.e. the regression describes the dependence better than the spread of `y` around the mean.

Least squares method. The least squares method means the determination of the unknown parameters a, b, c, … of an accepted functional dependence y = f(x, a, b, c, …) that provides a minimum of the mean square (variance) of the error over the pairs of numbers x_i, y_i obtained from the experiment (24). Since the condition for an extremum of a function of several variables is that its partial derivatives equal zero, the parameters a, b, c, … are determined from the system of equations ∂Q/∂a = 0; ∂Q/∂b = 0; ∂Q/∂c = 0; …  (25)

It must be remembered that the least squares method is used to select parameters after the form of the function y = f(x) has been defined. If no conclusions about the form of the empirical formula can be drawn from theoretical considerations, one has to be guided by visual representations, primarily a graphical representation of the observed data. In practice, the following types of functions are most often used: 1) linear; 2) quadratic.
If some physical quantity y depends on another quantity x, then this dependence can be investigated by measuring y at different values of x. As a result of the measurements, a series of values is obtained: x_1, x_2, …, x_i, …, x_n; y_1, y_2, …, y_i, …, y_n. From the data of such an experiment it is possible to plot the dependence y = ƒ(x). The resulting curve makes it possible to judge the form of the function ƒ(x); however, the constant coefficients entering this function remain unknown. They can be determined by the least squares method. The experimental points, as a rule, do not lie exactly on the curve. The method of least squares requires that the sum of the squared deviations of the experimental points from the curve, Σ(y_i − ƒ(x_i))², be the smallest.

In practice this method is most often (and most simply) used in the case of a linear relationship, i.e. when y = kx or y = a + bx. Linear dependence is very widespread in physics, and even when the dependence is non-linear, one usually tries to build the graph in such a way as to get a straight line. For example, if it is assumed that the refractive index of glass n is related to the wavelength λ of the light wave by the relation n = a + b/λ², then the dependence of n on λ⁻² is plotted on the graph.

Consider the dependence y = kx (a straight line passing through the origin). Let us compose the quantity φ, the sum of the squared deviations of our points from the straight line. The value of φ is always positive and turns out to be the smaller, the closer our points lie to the straight line. The method of least squares states that for k one should choose the value at which φ has a minimum.
The calculation shows that the root-mean-square error in determining the value of k is given by formula (20). Let us now consider the somewhat more difficult case, when the points must satisfy the formula y = a + bx (a straight line not passing through the origin). The task is to find, for the given set of values x_i, y_i, the best values of a and b. Again we compose the quadratic form φ, equal to the sum of the squared deviations of the points x_i, y_i from the straight line, and find the values of a and b for which φ has a minimum. The joint solution of the resulting equations gives formulas (21), (22) for a and b, and the root-mean-square errors of a and b are given by formulas (23), (24). When processing measurement results by this method, it is convenient to summarize all the data in a table in which all the sums entering formulas (19)–(24) are calculated beforehand. The forms of these tables are shown in the examples below.

Example 1. The basic equation of the dynamics of rotary motion, ε = M/J (a straight line passing through the origin), was studied. For various values of the moment M, the angular acceleration ε of a certain body was measured. It is required to determine the moment of inertia of this body. The results of the measurements of the moment of force and the angular acceleration are listed in the second and third columns of Table 5.

Table 5
By formula (19) we determine the slope k = 1/J = 0.3337 kg⁻¹·m⁻², whence J = 1/k = 2.996 kg·m². To determine the root-mean-square error we use formula (20): S_k = 0.005775 kg⁻¹·m⁻². By formula (18) we then have S_J = J·S_k/k = (2.996 · 0.005775)/0.3337 = 0.05185 kg·m². Taking the reliability P = 0.95, from the table of Student coefficients for n = 5 we find t = 2.78 and determine the absolute error ΔJ = 2.78 · 0.05185 = 0.1441 ≈ 0.2 kg·m². We write the result in the form J = (3.0 ± 0.2) kg·m².

Example 2. Let us calculate the temperature coefficient of resistance of a metal by the method of least squares. Resistance depends on temperature according to the linear law R_t = R₀(1 + α t°) = R₀ + R₀α t°. The free term determines the resistance R₀ at a temperature of 0 °C, and the slope is the product of the temperature coefficient α and the resistance R₀. The results of the measurements and calculations are given in Table 6.

Table 6
By formulas (21), (22) we determine the slope R₀α = 0.002645 Ohm/deg and the free term R₀ = R̄ − (R₀α)·t̄ = 1.4005 − 0.002645 · 85.83333 = 1.1735 Ohm. Let us find the error in the determination of α. Since α = (R₀α)/R₀, by formula (18) we find S_α = 0.000132 deg⁻¹. Using formulas (23), (24) we obtain the errors of the slope and of the free term, in particular S_R₀ = 0.014126 Ohm. Taking the reliability P = 0.95, from the table of Student's coefficients for n = 6 we find t = 2.57 and determine the absolute error Δα = 2.57 · 0.000132 = 0.000338 deg⁻¹. Thus α = (23 ± 4)·10⁻⁴ deg⁻¹ at P = 0.95.

Example 3. It is required to determine the radius of curvature of a lens from Newton's rings. The radii r_m of Newton's rings were measured and the numbers m of these rings were determined. The radii of Newton's rings are related to the radius of curvature of the lens R and to the ring number by the equation r_m² = mλR − 2d₀R, where d₀ is the thickness of the gap between the lens and the plane-parallel plate (or the deformation of the lens) and λ is the wavelength of the incident light, λ = (600 ± 6) nm. If we set y = r_m² and x = m, the equation takes the form y = a + bx. The results of the measurements and calculations are entered in Table 7.

Table 7
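Once the straight line y = a + bx has been fitted to the linearized Newton's-rings data, the physical quantities follow from b = λR and a = −2d₀R. The slope and intercept below are hypothetical fitted values chosen only to show the back-substitution, not the data of Table 7:

```python
# Newton's-rings linearization: r_m^2 = m*lam*R - 2*d0*R, i.e. y = a + b*x
# with y = r_m^2, x = m, b = lam*R, a = -2*d0*R.
lam = 600e-9        # wavelength, m (from the text)
b = 6.0e-7          # fitted slope, m^2 per ring number (hypothetical value)
a = -1.2e-9         # fitted intercept, m^2 (hypothetical value)

R = b / lam         # radius of curvature of the lens, m
d0 = -a / (2 * R)   # gap thickness between lens and plate, m
print(R, d0)
```

With these illustrative numbers R = 1.0 m and d₀ = 6·10⁻¹⁰ m.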
Approximation of experimental data is a method based on replacing the experimentally obtained data with an analytical function that passes most closely to, or coincides at the nodal points with, the initial values (the data obtained during the experiment). There are currently two ways to define such an analytic function:

∙ By constructing an interpolation polynomial of degree n that passes directly through all points of the given data array. In this case the approximating function is represented as an interpolation polynomial in the Lagrange form or an interpolation polynomial in the Newton form.

∙ By constructing an approximating polynomial of degree n that passes close to the points of the given data array. In this way the approximating function smooths out the random noise (or errors) that may occur during the experiment: the values measured in the experiment depend on random factors that fluctuate according to their own random laws (measurement or instrument errors, inaccuracies or experimental errors). In this case the approximating function is determined by the method of least squares.

The least squares method (in the English literature: Ordinary Least Squares, OLS) is a mathematical method based on determining an approximating function that is built in the closest proximity to the points of a given array of experimental data. The proximity of the initial function and the approximating function F(x) is measured numerically: the sum of the squared deviations of the experimental data from the approximating curve F(x) should be the smallest.

Figure: fitting curve constructed by the least squares method.

The least squares method is used:
∙ to solve overdetermined systems of equations, when the number of equations exceeds the number of unknowns;
∙ to find a solution of ordinary (not overdetermined) nonlinear systems of equations;
∙ to approximate point values by some approximating function.
The approximating function in the least squares method is determined from the condition of the minimum of the sum of squared deviations of the calculated approximating function from the given array of experimental data. This criterion of the least squares method is written as the expression

S = Σ_{i=1..N} (F(x_i) − y_i)² → min,

where F(x_i) are the values of the calculated approximating function at the nodal points and y_i is the given array of experimental data at the nodal points. The quadratic criterion has a number of "good" properties, such as differentiability and the provision of a unique solution of the approximation problem with polynomial approximating functions.

Depending on the conditions of the problem, the approximating function is a polynomial of degree m. The degree of the approximating function does not depend on the number of nodal points, but it must always be less than the dimension (the number of points) of the given array of experimental data.

∙ If the degree of the approximating function is m = 1, then we approximate the tabulated function with a straight line (linear regression).
∙ If the degree of the approximating function is m = 2, then we approximate the tabulated function with a quadratic parabola (quadratic approximation).
∙ If the degree of the approximating function is m = 3, then we approximate the tabulated function with a cubic parabola (cubic approximation).

In the general case, when it is required to construct an approximating polynomial of degree m for given tabulated values, the condition of the minimum of the sum of squared deviations over all nodal points is rewritten in the form

S = Σ_{i=1..N} (a₀ + a₁x_i + … + a_m x_i^m − y_i)² → min,

where a₀, a₁, …, a_m are the unknown coefficients of the approximating polynomial of degree m and N is the number of the given tabulated values. A necessary condition for the existence of a minimum of a function is that its partial derivatives with respect to the unknown variables a₀, a₁, …, a_m equal zero.
As a result we obtain the following system of equations:

∂S/∂a_k = 2 Σ_{i=1..N} (a₀ + a₁x_i + … + a_m x_i^m − y_i) x_i^k = 0, k = 0, 1, …, m.

Let us transform the resulting linear system of equations: open the brackets and move the free terms to the right-hand side. The resulting system of linear algebraic equations is then written in the form

Σ_{j=0..m} a_j Σ_{i=1..N} x_i^{k+j} = Σ_{i=1..N} y_i x_i^k, k = 0, 1, …, m.

This system of linear algebraic equations can be rewritten in matrix form. As a result, a system of linear equations of dimension m + 1 is obtained, containing m + 1 unknowns. This system can be solved by any method for solving systems of linear algebraic equations (for example, the Gauss method). As a result of the solution, the unknown parameters of the approximating function are found that provide the minimum of the sum of squared deviations of the approximating function from the original data, i.e. the best possible quadratic approximation. It should be remembered that if even one value of the initial data changes, all the coefficients change their values, since they are completely determined by the initial data.

Approximation of the initial data by a linear dependence (linear regression)

As an example, consider the method for determining the approximating function given as a linear dependence. In accordance with the method of least squares, the condition of the minimum of the sum of squared deviations is written as

S = Σ_{i=1..N} (a₁ + a₂x_i − y_i)² → min,

where x_i, y_i are the coordinates of the nodal points of the table and a₁, a₂ are the unknown coefficients of the approximating function, which is given as a linear dependence. A necessary condition for the existence of a minimum of a function is that its partial derivatives with respect to the unknown variables equal zero. As a result we obtain the system of equations

a₁N + a₂Σx_i = Σy_i,
a₁Σx_i + a₂Σx_i² = Σx_i y_i.

We solve this system of linear equations.
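The construction of the normal equations and their solution by the Gauss method, as described above, can be sketched in plain Python (no external libraries):

```python
# Polynomial least squares via the normal equations, solved by
# Gaussian elimination with partial pivoting (the "Gauss method").
def polyfit_normal(xs, ys, m):
    n = m + 1
    # Left-hand side: A[k][j] = sum(x_i^(k+j)); right-hand side: sum(y_i * x_i^k).
    A = [[float(sum(x ** (k + j) for x in xs)) for j in range(n)] for k in range(n)]
    rhs = [float(sum(y * x ** k for x, y in zip(xs, ys))) for k in range(n)]
    # Forward elimination
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(A[r][col]))
        A[col], A[piv] = A[piv], A[col]
        rhs[col], rhs[piv] = rhs[piv], rhs[col]
        for r in range(col + 1, n):
            f = A[r][col] / A[col][col]
            for c in range(col, n):
                A[r][c] -= f * A[col][c]
            rhs[r] -= f * rhs[col]
    # Back substitution
    coeffs = [0.0] * n
    for r in range(n - 1, -1, -1):
        s = sum(A[r][c] * coeffs[c] for c in range(r + 1, n))
        coeffs[r] = (rhs[r] - s) / A[r][r]
    return coeffs   # [a0, a1, ..., am]

# Points lying exactly on y = 1 + 2x + 3x^2 are recovered exactly:
print(polyfit_normal([0, 1, 2, 3, 4], [1, 6, 17, 34, 57], 2))
```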
The coefficients of the approximating function in analytic form are determined as follows (Cramer's method):

a₁ = (Σx_i² Σy_i − Σx_i Σx_i y_i) / (N Σx_i² − (Σx_i)²),
a₂ = (N Σx_i y_i − Σx_i Σy_i) / (N Σx_i² − (Σx_i)²).

These coefficients provide the construction of the linear approximating function in accordance with the criterion of minimizing the sum of squared deviations of the approximating function from the given tabulated values (the experimental data).

Algorithm for implementing the method of least squares

1. Initial data:
∙ an array of experimental data with the number of measurements N is given;
∙ the degree m of the approximating polynomial is given.

2. Calculation algorithm:
2.1. The coefficients for constructing the system of equations of dimension m + 1 are determined: the coefficients of the system of equations (the left-hand side of the equations),
where j is the index of the column of the square matrix of the system of equations, and the free terms of the system of linear equations (the right-hand side of the equations), where k is the index of the row of the square matrix of the system of equations.
2.2. Formation of the system of linear equations of dimension m + 1.
2.3. Solution of the system of linear equations in order to determine the unknown coefficients of the approximating polynomial of degree m.
2.4. Determination of the sum of squared deviations of the approximating polynomial from the initial values over all nodal points; the value found is the minimum possible.

Approximation with other functions

It should be noted that when approximating the initial data by the method of least squares, a logarithmic function, an exponential function or a power function is sometimes used as the approximating function.

Logarithmic approximation

Consider the case when the approximating function is given by a logarithmic function.

The method of least squares has many applications, since it allows an approximate representation of a given function by other, simpler ones. LSM can be extremely useful in processing observations, and it is actively used to estimate some quantities from the results of measurements of others containing random errors. In this article you will learn how to implement least squares calculations in Excel.

Statement of the problem on a specific example

Suppose there are two indicators, X and Y, and Y depends on X. Since OLS is of interest to us from the point of view of regression analysis (in Excel its methods are implemented using built-in functions), we should proceed at once to a specific problem. So let X be the selling area of a grocery store, measured in square metres, and Y the annual turnover, measured in millions of rubles. It is required to forecast what turnover (Y) a store will have if it has a given retail space.
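The logarithmic approximation mentioned above can be reduced to the linear case. Assuming the common form y = a + b·ln(x), the substitution u = ln(x) turns it into y = a + b·u, and the ordinary linear least-squares formulas apply unchanged:

```python
import math

# Logarithmic approximation y = a + b*ln(x) via the substitution u = ln(x).
def fit_log(xs, ys):
    us = [math.log(x) for x in xs]
    n = len(us)
    su, sy = sum(us), sum(ys)
    suu = sum(u * u for u in us)
    suy = sum(u * y for u, y in zip(us, ys))
    d = n * suu - su * su
    b = (n * suy - su * sy) / d   # coefficient of ln(x)
    a = (sy - b * su) / n         # free term
    return a, b

# Points generated from y = 2 + 3*ln(x) are recovered:
xs = [1.0, math.e, math.e ** 2]
ys = [2.0, 5.0, 8.0]
a, b = fit_log(xs, ys)
print(round(a, 6), round(b, 6))
```

The same substitution trick handles exponential and power-law fits after taking logarithms of y (and, for the power law, of x as well).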
Obviously, the function Y = f(X) is increasing, since a hypermarket sells more goods than a stall.

A few words about the correctness of the initial data used for prediction

Suppose we have a table built with data for n stores. According to mathematical statistics, the results will be more or less correct if data on at least 5-6 objects are examined. "Anomalous" results cannot be used either: in particular, a small elite boutique can have a turnover many times greater than that of large outlets of the "mass-market" class.

The essence of the method

The table data can be displayed on the Cartesian plane as points M_1(x_1, y_1), …, M_n(x_n, y_n). Now the solution of the problem reduces to the selection of an approximating function y = f(x) whose graph passes as close as possible to the points M_1, M_2, …, M_n. Of course, one can use a polynomial of high degree, but this option is not only difficult to implement but simply incorrect, since it will not reflect the main trend that needs to be detected. The most reasonable solution is to search for the straight line y = ax + b that best approximates the experimental data, or more precisely, for its coefficients a and b.

Accuracy score

For any approximation, the assessment of its accuracy is of particular importance. Denote by e_i the difference (deviation) between the functional and experimental values for the point x_i, i.e. e_i = y_i − f(x_i). To assess the accuracy of the approximation one might use the sum of the deviations; that is, when choosing a straight line for an approximate representation of the dependence of Y on X, preference would be given to the one with the smallest sum of the e_i over all points under consideration. However, not everything is so simple: along with positive deviations there will in practice be negative ones. The problem can be solved using the moduli of the deviations or their squares. The latter method is the most widely used.
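The cancellation problem described above is easy to see with a toy set of residuals (made up for illustration):

```python
# With e_i = y_i - f(x_i), positive and negative deviations cancel in a
# plain sum, hiding a bad fit; the sum of squares reveals the real spread.
residuals = [2.0, -2.0, 1.5, -1.5]       # clearly a poor fit
print(sum(residuals))                     # plain sum: 0.0 -- looks perfect
print(sum(e * e for e in residuals))      # sum of squares: 12.5
```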
It is used in many areas, including regression analysis (in Excel it is implemented using two built-in functions), and has long proved its effectiveness.

The least squares method in Excel

Excel, as you know, has a built-in AutoSum function that allows you to calculate the sum of all values located in a selected range. Thus nothing prevents us from calculating the value of the expression (e_1² + e_2² + e_3² + … + e_n²). In mathematical notation this is

S(a, b) = Σ_{i=1..n} e_i² = Σ_{i=1..n} (y_i − a x_i − b)².

Since the decision was initially made to approximate with a straight line, we have f(x_i) = a x_i + b. Thus the task of finding the straight line that best describes the specific relationship between X and Y amounts to calculating the minimum of a function of two variables. This requires equating to zero the partial derivatives with respect to the variables a and b and solving a simple system of two equations with two unknowns. After simple transformations, including division by 2 and manipulation of the sums, we obtain

a Σx_i² + b Σx_i = Σx_i y_i,
a Σx_i + b n = Σy_i.

Solving it, for example by Cramer's method, we obtain a stationary point with certain coefficients a* and b*. This is the minimum; i.e., to predict what turnover the store will have for a given area, the straight line y = a*x + b* is suitable, which is the regression model for the example in question. Of course, it will not give the exact result, but it will help you get an idea of whether buying a store of a particular area on credit will pay off.

How to implement the least squares method in Excel

Excel has a function for calculating least-squares values. It has the following form: TREND(known Y values; known X values; new X values; constant). Let us apply the TREND formula to our table. To do this, enter the "=" sign in the cell where the result of the least-squares calculation should be displayed and select the TREND function.
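The behaviour of TREND can be mimicked in plain Python: fit y = ax + b to the known points, then evaluate the line at the new x values. The store areas and turnover figures below are hypothetical, purely for illustration:

```python
# A plain-Python analogue of Excel's TREND(known_y's, known_x's, new_x's).
def trend(known_ys, known_xs, new_xs):
    n = len(known_xs)
    sx, sy = sum(known_xs), sum(known_ys)
    sxx = sum(x * x for x in known_xs)
    sxy = sum(x * y for x, y in zip(known_xs, known_ys))
    a = (n * sxy - sx * sy) / (n * sxx - sx * sx)   # slope a*
    b = (sy - a * sx) / n                            # intercept b*
    return [a * x + b for x in new_xs]

areas = [100, 200, 300, 400, 500]   # selling area, m^2 (hypothetical)
turnover = [15, 25, 33, 45, 54]     # annual turnover, mln rub (hypothetical)
preds = trend(turnover, areas, [250, 600])
print([round(p, 2) for p in preds])
```

For these numbers the fitted line is y = 0.098x + 5, so the forecasts for 250 m² and 600 m² are 29.5 and 63.8 million rubles respectively.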
In the window that opens, fill in the appropriate fields, highlighting:
In addition, there is a logical argument "Const" in the formula. If you enter 0 (FALSE) in the corresponding field, the calculations are carried out assuming that b = 0; with 1 (TRUE), or with the field left empty, the constant b is calculated normally. If you need the forecast for more than one x value, then after entering the formula you should not press "Enter" but type the combination "Ctrl" + "Shift" + "Enter" on the keyboard.

Some features

Regression analysis can be accessible even to beginners. The Excel formula for predicting the values of an array of unknown variables, TREND, can be used even by those who have never heard of the method of least squares. It is enough to know some features of its work. In particular:
The FORECAST function

It is implemented using several functions. One of them is called FORECAST. It is similar to TREND, i.e. it gives the result of a least-squares calculation, but only for a single X for which the value of Y is unknown.

Now you know the Excel formulas for beginners that allow you to predict the future value of an indicator according to a linear trend.