Abstract
Motivated by problems in metrology, we consider a numerical evaluation program y = f(x) as a model for a measurement process. We use a probability density function to represent the uncertainties in the inputs x and examine some of the consequences of using Automatic Differentiation to propagate these uncertainties to the outputs y. We show how a combination of Taylor series propagation and interval partitioning yields coverage (confidence) intervals and ellipsoids based on unbiased estimators for the means and covariances of the outputs, even where f is sharply non-linear, and even when the required level of probability makes Monte Carlo techniques computationally problematic.
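As a minimal illustration of the Taylor-propagation ingredient described above (the interval-partitioning step is not shown), the sketch below propagates a mean and variance through a hypothetical nonlinear model f to first order. The model f, the input mean and variance, and the central-difference derivative (standing in for Automatic Differentiation) are all illustrative assumptions, not taken from the paper.

```python
import math

def f(x):
    # Hypothetical nonlinear measurement model y = f(x); not from the paper.
    return x * x + math.sin(x)

def dfdx(x, h=1e-6):
    # Central finite difference used here as a stand-in for
    # Automatic Differentiation of f at x.
    return (f(x + h) - f(x - h)) / (2 * h)

def propagate(mu_x, var_x):
    """First-order Taylor propagation of mean and variance.

    Linearising y ~ f(mu_x) + f'(mu_x) * (x - mu_x) gives
    mu_y ~ f(mu_x) and var_y ~ f'(mu_x)**2 * var_x.
    """
    mu_y = f(mu_x)
    var_y = dfdx(mu_x) ** 2 * var_x
    return mu_y, var_y

# Illustrative input: mean 1.0 with variance 0.01.
mu_y, var_y = propagate(1.0, 0.01)
```

For sharply non-linear f, this first-order linearisation becomes inaccurate; that is precisely where the paper's interval partitioning and higher-order Taylor terms come into play.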
| Original language | English |
|---|---|
| Pages (from-to) | 47-58 |
| Journal | Lecture Notes in Computational Science and Engineering |
| Volume | 50 |
| DOIs | |
| Publication status | Published - 2006 |