This page shows how uncertainty in a measured quantity will propagate through a mathematical expression involving that quantity.
Whenever calculations are done using imprecise numbers, the results of those calculations are also imprecise. The precision (expressed as the "standard error") of the result of evaluating any function f(x) depends on the precision of x and on the derivative of the function with respect to x.
When two or more variables appear together in a function f(x,y), the precision of the result depends on the precision of each variable, on the partial derivative of the function with respect to each variable, and on any correlation between the random fluctuations in the variables.
Correlated fluctuations most commonly arise when the two variables are parameters resulting from a curve-fit. A good curve-fitting program should produce the error-correlation between the parameters as well as the standard error of each parameter. (Check out my non-linear least squares curve fitting page.)
If you're interested in how this page does what it does, read the Techie-Stuff section, at the bottom of this page.
The sections below perform all the required calculations for a function of one or two variables. Just enter the numbers and their standard errors (and error-correlation, if known), and click the Propagate button.
Operators: + - * / and parentheses
Constants: Pi (=3.14...), e (=2.718...), Deg(=180/Pi = 57.2...)
[Unless otherwise indicated, all functions take a single numeric argument, enclosed in parentheses after the name of the function.]
Algebraic: Abs, Sqrt, Power(x,y) [= x raised to the power of y], Fact [factorial]
Transcendental: Exp, Ln [natural], Log10, Log2
Trigonometric: Sin, Cos, Tan, Cot, Sec, Csc
Inverse Trig: ASin, ACos, ATan, ACot, ASec, ACsc
Hyperbolic: SinH, CosH, TanH, CotH, SecH, CscH
Inverse Hyp: ASinH, ACosH, ATanH, ACotH, ASecH, ACscH
Statistical: Norm, ChiSq(x,df), StudT(t,df), FishF(F,df1,df2)
Inverse Stat: ANorm, AChiSq(p,df), AStudT(p,df), AFishF(p,df1,df2)
Note: The trig functions work in radians. For degrees, multiply or divide by the Deg variable. For example: Sin(30/Deg) will return 0.5, and ATan(1)*Deg will return 45.
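The same degree/radian conversions can be illustrated in Python (the page itself runs JavaScript; this is just an equivalent sketch):

```python
import math

DEG = 180.0 / math.pi  # the page's Deg constant (57.29577...)

# Sin(30/Deg): divide by Deg to convert degrees to radians first
sin_30 = math.sin(30.0 / DEG)      # ≈ 0.5

# ATan(1)*Deg: multiply by Deg to convert the radian result to degrees
atan_1 = math.atan(1.0) * DEG      # ≈ 45.0
```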
Note: The factorial function is implemented for all real numbers. For non-integers its accuracy is about 6 significant figures. For negative integers it returns either a very large number or a division-by-zero error.
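The page doesn't say exactly how Fact is computed; a common way to extend the factorial to all real numbers (and the assumption behind this sketch) is the gamma function, via the identity x! = Gamma(x+1):

```python
import math

def fact(x):
    """Factorial extended to real x via the gamma function: x! = Gamma(x+1).
    Gamma has poles at the non-positive integers, so negative-integer
    arguments raise an error, consistent with the note above."""
    return math.gamma(x + 1.0)

f5 = fact(5)      # 120.0 (exact for small integers)
f35 = fact(3.5)   # ≈ 11.6317
```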
Note: The statistical functions Norm and StudT return 2-tail p-values (eg: Norm(1.96)=0.05), while ChiSq and FishF return 1-tail values. This is consistent with the way these functions are most frequently used.
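As an illustration of the two-tail convention, here is a small Python check using the identity that the two-tail normal p-value equals erfc(z/sqrt(2)):

```python
import math

def norm_2tail(z):
    """Two-tail p-value for the standard normal, matching the page's Norm():
    p = P(|Z| > z) = 2*(1 - Phi(z)) = erfc(z / sqrt(2))."""
    return math.erfc(z / math.sqrt(2.0))

p = norm_2tail(1.96)   # ≈ 0.05, as in the note above
```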
My error-propagation web page takes a very general approach, which is valid for addition, multiplication, and any other functional form.
For propagating an error through any function of a single variable: z = F(x),
the rule is fairly simple:
The standard error (SE) of z is obtained by multiplying the SE of x by the derivative of F(x) with respect to x (ignoring the sign of the derivative).
Now it would be hellishly difficult to have my web page attempt symbolic differentiation of whatever function you typed in. So instead, it obtains a numerical estimate of the derivative of F(x) by the method of "finite differences". It takes the value of x that you provided, adds the standard error you provided, evaluates your function at that value, and saves the result. Then it subtracts the standard error from your x value and evaluates the function there. It divides the difference between the two function values by the difference between the two x values at which it evaluated the function (which happens to be twice the standard error), and this ratio is a very good approximation to the derivative. Finally, it takes the absolute value of this derivative and multiplies it by the standard error you provided, and that's the standard error of z that the web page reports. Actually, the program is able to simplify the formulas a little bit (the two factors of the standard error cancel out), but basically that's how it's done.
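The procedure described above can be sketched in Python (the page itself runs JavaScript; propagate_se is a hypothetical helper name, not the page's actual code):

```python
import math

def propagate_se(f, x, se_x):
    """Estimate the SE of z = f(x) by central finite differences:
    evaluate f at x + se_x and x - se_x, form the slope over that
    interval (width 2*se_x), then multiply the slope's magnitude by se_x."""
    hi = f(x + se_x)
    lo = f(x - se_x)
    deriv = (hi - lo) / (2.0 * se_x)   # numerical derivative of f at x
    return abs(deriv) * se_x           # simplifies to abs(hi - lo) / 2

# Example: z = ln(x) with x = 10 +/- 0.5.
# The exact rule gives SE_z = se_x / x = 0.05; the estimate agrees closely.
se_z = propagate_se(math.log, 10.0, 0.5)   # ≈ 0.05
```

The cancellation noted above is visible in the return line: multiplying the slope by se_x leaves just half the absolute difference between the two function values.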
For a function of two variables: z = F(x,y), the rule is a little more complicated. If the random errors in x and y are independent (that is, uncorrelated with each other), then the SE of z is the square root of the sum of the squares of two terms: (the partial derivative of F with respect to x, times the SE of x) and (the partial derivative of F with respect to y, times the SE of y).
I obtain the partial derivatives by the same "finite differences" technique.
If the random fluctuations in x and y are correlated with each other (which usually happens only if x and y have been obtained from the same set of measurements -- for example, if x and y are two parameters obtained from a curve-fit to a set of measured data), then the formula is a little more complicated: you have to add in a cross-product term involving the two partial derivatives and the correlation coefficient between the random errors in x and y.
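The two-variable rule, including the correlation cross-product term, can be sketched the same way (again a hypothetical Python helper, not the page's actual JavaScript):

```python
import math

def propagate_se2(f, x, se_x, y, se_y, r=0.0):
    """Estimate the SE of z = f(x, y) by finite-difference partial derivatives.
    r is the error-correlation between x and y (0 when independent).
    SE_z^2 = (Fx*se_x)^2 + (Fy*se_y)^2 + 2*r*(Fx*se_x)*(Fy*se_y)."""
    fx = (f(x + se_x, y) - f(x - se_x, y)) / (2.0 * se_x)  # dF/dx
    fy = (f(x, y + se_y) - f(x, y - se_y)) / (2.0 * se_y)  # dF/dy
    var = (fx * se_x) ** 2 + (fy * se_y) ** 2 \
        + 2.0 * r * (fx * se_x) * (fy * se_y)
    return math.sqrt(var)

# Example: z = x + y with x = 10 +/- 3 and y = 20 +/- 4, independent errors.
# This reproduces the usual quadrature rule: sqrt(3^2 + 4^2) = 5.
se_sum = propagate_se2(lambda x, y: x + y, 10.0, 3.0, 20.0, 4.0)   # ≈ 5.0
```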
All this may seem abstract, but it turns out to be a very general approach -- it automatically accomplishes the same thing as the usual "special case" formulas: for sums and differences, the SEs combine in quadrature (the square root of the sum of their squares); for products and quotients, the relative SEs combine in quadrature; for powers, the relative SE is multiplied by the exponent.
Send e-mail to John C. Pezzullo (this page's author) at email@example.com