Basic Error Propagation Through the Use of Taylor Series

In courses on experimentation, propagated errors are typically treated through a Taylor series expansion that evaluates the total contribution of individual measurement uncertainties to a final calculated result. As an example, suppose we wish to experimentally determine the acceleration of a body due to gravity. We could take an object and drop it a measured distance while recording the elapsed time. From basic physics we know that the distance travelled is proportional to the time squared and that the proportionality constant is half the acceleration, or

$$d = \frac{1}{2} a t^2$$

Solving for the acceleration, we obtain

$$a = \frac{2d}{t^2}$$

which is a function of two measured variables: the distance travelled and the elapsed time. Both of these measurements, no matter how carefully obtained, will have some uncertainty. Suppose we measure the distance travelled with a ruler that has graduations every 1 inch and the time with a stopwatch with a resolution of 0.1 seconds. With both of these instruments it is evident that we cannot measure the quantity to a higher resolution than the instrument provides; therefore it is typical to take the total uncertainty in the measurement as one least significant digit of the scale, centered on the measurement value. This equates to uncertainties of ±0.5 inches in the distance and ±0.05 seconds in the time.

Let’s say we dropped the object from a height of 36 feet and measured the elapsed time as 1.5 seconds. From the above equation we would find the acceleration to be 32 ft/s². But how carefully did we measure? Was the distance exactly 36 feet (432 inches), or was it 432.3 inches? Was the time 1.48 seconds? As long as these uncertainties are within the predefined ranges established above, we can calculate the total uncertainty in the measurement of the acceleration.

In an earlier post we discussed the approximation of a function by a Taylor series expansion about a specific point. We can apply that technique to determine how the value of the acceleration may vary with perturbations in the input values of time and distance about the measured point. For the simplest implementation, we restrict ourselves to the first order terms of the expansion[1].

Recall that the Taylor series of a function $f(x, y)$ about the point $(a, b)$ is given as

$$f(x,y) = \sum_{n=0}^{\infty} \frac{1}{n!} \left[ (x-a)\frac{\partial}{\partial x} + (y-b)\frac{\partial}{\partial y} \right]^{n} f(a,b)$$

Expanding this to the first order terms yields

$$f(x,y) \approx f(a,b) + (x-a)\left.\frac{\partial f}{\partial x}\right|_{(a,b)} + (y-b)\left.\frac{\partial f}{\partial y}\right|_{(a,b)}$$

rewriting as

$$f(x,y) - f(a,b) \approx (x-a)\left.\frac{\partial f}{\partial x}\right|_{(a,b)} + (y-b)\left.\frac{\partial f}{\partial y}\right|_{(a,b)}$$

We can now see that the left side of the equation evaluates to the change in the function corresponding to a perturbation of $x$ and $y$ by small amounts about $(a, b)$. Examining the partial derivative terms, we see that we are multiplying the rate of change of the function in a single variable by the change in that variable from the point of interest $(a, b)$. Since we are interested in small perturbations of $x$ and $y$ about the point $(a, b)$, we will denote these changes $\Delta x$ and $\Delta y$. The change of the function under these perturbations we will denote $\Delta f$.

Substituting, we obtain

$$\Delta f = \Delta x \frac{\partial f}{\partial x} + \Delta y \frac{\partial f}{\partial y}$$

Lastly, since there should be no preference for the uncertainty to be in the positive or negative direction, we take the absolute value of the derivative terms and require that our perturbations be defined as positive,

$$\Delta f = \left| \frac{\partial f}{\partial x} \right| \Delta x + \left| \frac{\partial f}{\partial y} \right| \Delta y$$

This can be generalized to a function of any number of variables as

$$\Delta f = \sum_{i=1}^{n} \left| \frac{\partial f}{\partial x_i} \right| \Delta x_i$$

Returning to our example, to find the total uncertainty in the calculated acceleration we simply need to determine the partial derivatives of the function with respect to the independent variables,

$$\frac{\partial a}{\partial d} = \frac{2}{t^2}, \qquad \frac{\partial a}{\partial t} = -\frac{4d}{t^3}$$

and insert them into our formula

$$\Delta a = \left| \frac{2}{t^2} \right| \Delta d + \left| -\frac{4d}{t^3} \right| \Delta t$$

Evaluating with our collected data ($d = 36$ ft, $t = 1.5$ s, $\Delta d = 0.5$ in $\approx 0.042$ ft, $\Delta t = 0.05$ s),

$$\Delta a = \frac{2}{(1.5)^2}(0.042) + \frac{4(36)}{(1.5)^3}(0.05) \approx 0.04 + 2.13 \approx 2.2 \ \mathrm{ft/s^2}$$


Therefore our calculated acceleration should be given as $a = 32 \pm 2.2 \ \mathrm{ft/s^2}$.
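The worked numbers above can be reproduced with a short script. This is a minimal sketch of the first-order worst-case formula; the variable names are illustrative.

```python
# First-order (worst-case) error propagation for a = 2*d/t**2.
# Measured values and instrument uncertainties from the example above.
d = 36.0               # distance, ft
t = 1.5                # time, s
delta_d = 0.5 / 12.0   # +/- 0.5 in, converted to ft
delta_t = 0.05         # +/- 0.05 s

a = 2 * d / t**2       # calculated acceleration, ft/s^2

# Partial derivatives: da/dd = 2/t^2, da/dt = -4*d/t^3
da = abs(2 / t**2) * delta_d + abs(-4 * d / t**3) * delta_t

print(f"a = {a:.1f} +/- {da:.1f} ft/s^2")  # a = 32.0 +/- 2.2 ft/s^2
```

Note how the timing term dominates the total: the stopwatch resolution contributes about 2.13 ft/s² of the uncertainty, the ruler only about 0.04 ft/s².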

It should be noted that the calculated uncertainty is the worst case possible under the individual assumptions in the treatment. Alternative treatments based on statistics may be more realistic. Another useful application of these methods is to define the sensitivity of the dependent variable to the inputs of the function. These will be addressed in subsequent posts.

Caveat lector — All work and ideas presented here may not be accurate and should be verified before application.

[1] 2nd order and higher terms include the square and higher powers of the perturbation amount. Assuming that the perturbation is small, the square of this small perturbation is much smaller still, and higher order terms become negligible. If the relative size of the perturbation to the curvature of the function is in doubt, the magnitude of the 2nd and higher order terms should be checked.

Fixing Post Thumbnails in the Arras Theme

The solution was found on the Arras forums. This link explains a small coding change to the filters.php file located at wp-content/themes/arras-theme/library.

The following code is on line 103 of filters.php:

$lead = get_post_meta($post->ID, ARRAS_POST_THUMBNAIL, true);
if ( $lead) {

replace with the following:

$lead = get_post_meta($post->ID, ARRAS_POST_THUMBNAIL, true);
if ( $lead && arras_get_option('single_thumbs')) {

That will get the “Post Thumbnail” radio button to work again…

Gnuplot + ImageMagick for Web Graphs

To generate the Taylor series graph shown in this post, the base code was generated using Gnuplot with the latex terminal option. The following Gnuplot input file was used to create the code:

set term latex 
set output "graph.tex"
set xrange [-1:3]
set yrange [-2:10]
set key off
set xtics 0
set ytics 0
set border 0 0 
set xzeroaxis
set yzeroaxis
set xtics axis out ("" -0.5, "0.5" 0.5, "" 1, "" 1.5, "" 2, "" 2.5)
set ytics axis out ("" -1, "" 1, "" 3, "" 5, "" 7, "" 9) 
set label 1 " n = 0 " at 2.35, 1.6, 0 
set label 2 " n = 1 " at 1.9, 7, 0
set label 3 " n = 2 " at 1.6, 9.6, 0 
set label 4 " n = 3 " at 2.7, 7.2, 0
set label 5 " 5x^2 - x^4 " at 1.5,3,0  
show label
set title "1st Four Terms of the Taylor Series Expansion of $f(x) = 5x^{2} - x^{4}$" 
show title
plot 5*x**2 - x**4 lt 3 lw 2, 5*0.5**2-0.5**4 lt 4 lw 1, 5*0.5**2-0.5**4+(x-0.5)*(10*0.5-4*0.5**3) lt 4 lw 1, 5*0.5**2-0.5**4+(x-0.5)*(10*0.5-4*0.5**3)+(((x-0.5)**2)/2)*(10-12*0.5**2) lt 4 lw 1, 5*0.5**2-0.5**4+(x-0.5)*(10*0.5-4*0.5**3)+(((x-0.5)**2)/2)*(10-12*0.5**2)+(((x-0.5)**3)/6)*(0-24*0.5) lt 4  lw 1

Linestyles are limited in the latex terminal, but the beauty of this method is that any mathematical formulae can now be rendered in LaTeX. The output only includes the {picture} block, so the following lines need to be prepended to the output using your favorite editor:

\documentclass{article}
\begin{document}
And finally

\end{document}
should be appended.

Any LaTeX editor may be used to compile the output. I used Texmaker 1.9.1, as it was the default installed application on my workstation. Output was rendered to PDF and converted to PNG using ImageMagick with the command:

convert -density 288 graph.pdf -resample 144 -trim +repage graph.png

This ports the PDF through Ghostscript and resizes it from 72 dpi to 144 dpi while providing some anti-aliasing. Final touch-ups were performed with the GIMP, and the image was posterized to 8 color levels using a level of 3 to reduce the size. The final image size is 9.5 kB.

The pstricks terminal may be able to do better. Another option is the epslatex terminal, which combines PostScript output with LaTeX rendering.

Caveat lector — All work and ideas presented here may not necessarily provide the same results for another user. The methods appearing here happen to work for me.

Taylor Series Approximation of a Function

Under certain conditions we may approximate an analytic function about a specified point by an infinite series. The most useful series for our purposes is the Taylor series.

The Taylor series of a function $f(x)$ about the point $x = a$ is given as

$$f(x) = \sum_{n=0}^{\infty} \frac{f^{(n)}(a)}{n!} (x-a)^n$$

The equation above allows us to approximate the value of the function as an infinite series for any point sufficiently close to $a$ while only knowing the value of the function and its derivatives at $a$. As an example, consider the equation $f(x) = 5x^2 - x^4$. This is an inverted “W”-shaped function with roots at $0, \pm\sqrt{5}$. We are interested in a region centered about $x = 0.5$, so we may begin by evaluating the above equation with $a = 0.5$ for increasing values of $n$.

For $n = 0$,

$$f(x) \approx f(0.5) = 5(0.5)^2 - (0.5)^4 = 1.1875$$

For $n = 1$,

$$f(x) \approx 1.1875 + f'(0.5)(x - 0.5) = 1.1875 + 4.5(x - 0.5)$$

For $n = 2$,

$$f(x) \approx 1.1875 + 4.5(x - 0.5) + \frac{f''(0.5)}{2}(x - 0.5)^2 = 1.1875 + 4.5(x - 0.5) + 3.5(x - 0.5)^2$$


For our 4th order function, any derivatives of order $n \geq 5$ are equal to $0$. Therefore we have a finite number of terms in the full Taylor series expansion, up to a maximum of $n = 4$. The following graph shows the original function and the Taylor series approximations for $n = 0$ through $3$. In this case, the series obtained when $n = 4$ is algebraically equivalent to our original function.
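The expansion can also be checked numerically. This is a minimal sketch using the derivatives of $f(x) = 5x^2 - x^4$ at $a = 0.5$ worked out by hand above; the function names are illustrative.

```python
# Taylor series of f(x) = 5*x**2 - x**4 about a = 0.5, summed to order n.
from math import factorial

a = 0.5
# Derivatives of f at a, orders 0 through 4 (all higher orders vanish).
derivs = [
    5 * a**2 - a**4,    # f(a)    = 1.1875
    10 * a - 4 * a**3,  # f'(a)   = 4.5
    10 - 12 * a**2,     # f''(a)  = 7.0
    -24 * a,            # f'''(a) = -12.0
    -24.0,              # f''''(a)
]

def taylor(x, n):
    """Sum the Taylor series of f about a up to and including order n."""
    return sum(derivs[k] / factorial(k) * (x - a)**k for k in range(n + 1))

def f(x):
    return 5 * x**2 - x**4

# The n = 4 expansion reproduces the original quartic exactly.
print(taylor(2.0, 4), f(2.0))  # both evaluate to 4.0
```

Truncating at lower orders shows the approximation degrading as you move away from $a$, which is what the graph illustrates.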


The Taylor series may also be defined for multivariate functions. This extends the usefulness of the series to functions of multiple variables and, as we shall see later, allows us to predict the function values for small deviations about a nominal point, as well as ascribe sensitivity of the function to the independent variables.

The Taylor series of a function $f(x, y)$ about the point $(a, b)$ is given as

$$f(x,y) = \sum_{n=0}^{\infty} \frac{1}{n!} \left[ (x-a)\frac{\partial}{\partial x} + (y-b)\frac{\partial}{\partial y} \right]^{n} f(a,b)$$

We’ll use this later as we begin to discuss error propagation.

WordPress and memory management

I’m running LAMP and WordPress 2.8.6 on a Sun Fire X2100 64-bit Opteron 146 server. When I first set up this server, it was for static HTML and light-duty email & FTP usage, so the base 512 MB of memory worked fine. On the initial install of WordPress, and for a few days after, everything looked like it was running smoothly. Recently, though, I began noticing a large delay in processing web pages, so I took a look at the memory usage. I was maxed out on physical RAM and using another 500 MB or so of swap. Load averages during my own usage would shoot up to 6 or so, so apparently PHP was swapping out every time a dynamic request came through. Firstly, the automatic YUM daemon was utilizing 30% of my physical RAM, so I shut that off:

# chkconfig --level 0123456 yum-updatesd off

but I could still see my memory usage creep up and then start to swap. So, after a quick trip to purloin some DDR1 unbuffered ECC memory, 2 GB is installed and everything is running beautifully. 770 MB usage in core after the reboot, with a lot of headroom, should hopefully fix the issue for good.

Now, one question may be: why did I have memory issues in the first place? 512 MB should be plenty for a LAMP server running Postfix + SpamAssassin. And if I had set Fedora up as a bare-bones server install from the beginning, that question would be valid. But I’m also running the full GNOME desktop and its associated bloat. 2 GB ought to keep everyone happy, though. When I get around to upgrading from FC7, I’ll strip the system down to the bare essentials.

Treating Attribute Data

When speaking of attribute data, in this case we are concerning ourselves with the measurement of a certain quantity which can take on one of two values: TRUE or FALSE; 0, 1; Pass, Fail; Heads, Tails; etc. — a binary output. Probability theory gives us a simple tool for the analysis of such a system: the binomial distribution.

Note: The binomial distribution is valid for sampling with replacement. It is a good approximation for sampling without replacement when the parent population is large. For small populations, the hypergeometric distribution should be used.

The binomial distribution is defined by the PMF

$$P(n) = \binom{N}{n} p^n (1-p)^{N-n}$$

which gives the discrete probability of $n$ outcomes in $N$ trials, where $N$ is the number of independent experiments yielding a binary result, $n$ is the number of occurrences of the specified binary outcome in the $N$ trials, and $p$ is the probability that the specified outcome will occur in a single trial.

When performing experimentation to determine a product failure rate or to verify that a product will meet a specified reliability rating, we are more interested in the probability that the specified number of outcomes or fewer will occur. For this we use the cumulative probability function for the binomial distribution, defined in our case as the probability that $n$ or fewer outcomes will occur. This is simply the sum of the probabilities that $x$ outcomes will occur for $x = 0$ to $n$,

$$P(X \leq n) = \sum_{x=0}^{n} \binom{N}{x} p^x (1-p)^{N-x}$$

With this background, let’s lay out the problem. A newly designed product must meet a certain reliability rating, which is defined as a maximum percentage of unit failures during operation. Our goal is to determine, through testing, and to a certain confidence level, whether the product meets this criterion. For this, we need to approach the binomial distribution from the inside out. What we are actually specifying when we define a reliability criterion is the probability that the device will fail, or $p$ in the equations above. Much like a six-sided die, which has a probability of presenting a given number on each roll and will, on average, present that number one out of six times over a large number of trials, we wish our product to have a specified (typically low) probability of failure. The question then becomes: how do we test for this?

Having defined the desired probability of failure of the parent population, $p$, and realizing that we will accept probabilities that are lower, but not higher, we can see that the cumulative probability $P(X \leq n)$ gives the probability of $n$ or fewer failures occurring in $N$ trials of a product that has a probability of failure of $p$. Thus the cumulative probability yields the likelihood that we would see $n$ or fewer failures of the product purely by chance. Therefore $C = 1 - P(X \leq n)$ is the probability that more than $n$ failures would present in $N$ trials where the probability of failure for each trial is $p$. We call this value our confidence.

Now we can specify our confidence that the probability of failure of each individual device is no greater than $p$. The question then becomes: how many trials, $N$, must be run, and how many failures, $n$, are allowable in those trials? By specifying either $N$ or $n$, the other can be found directly from the formula. For example, if we specify that we want to determine to a 99% confidence that 5% or less of the devices in the parent population will fail during operation ($p = 0.05$, $C = 0.99$) and we are willing to allow one failure during our testing ($n = 1$), then the planned number of test samples needs to be $N = 130$.

Unfortunately, the total number of trials cannot necessarily be specified in advance, because the laws of chance may dictate that a failure occurs in the first few samples. Therefore it is prudent to plan for at least one failure during testing. We can then compute the number of trials required for both zero and one failure and may halt the testing if no failures have been recorded after the lower number of trials. In the example above, that would equate to 90 test samples for $n = 0$.
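These sample sizes can be found with a short search over $N$. This is a minimal sketch based on the cumulative binomial formula above; the function names are illustrative.

```python
# Required sample sizes for attribute (pass/fail) reliability testing.
from math import comb

def binom_cdf(n, N, p):
    """P(X <= n): probability of n or fewer failures in N trials."""
    return sum(comb(N, x) * p**x * (1 - p)**(N - x) for x in range(n + 1))

def required_samples(n, p, confidence):
    """Smallest N such that 1 - P(X <= n) meets the target confidence."""
    N = n + 1
    while 1 - binom_cdf(n, N, p) < confidence:
        N += 1
    return N

# 99% confidence that 5% or less of the population fails:
print(required_samples(0, 0.05, 0.99))  # 90 samples if no failures allowed
print(required_samples(1, 0.05, 0.99))  # 130 samples allowing one failure
```

Looping over values of $n$ and $p$ with a function like this is an easy way to generate the table below.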

It is useful to create a table similar to that shown below to better understand the testing requirements of a given situation. This table is set up in a slightly different format to allow the problem to be stated in a slightly different fashion. Here we look at the confidence, $C$, in the reliability (the probability that the failure will not occur, $1 - p$), given a number of observed failures, $n$, in the total opportunities, $N$.

Table of Required Samples for Attribute Testing

Caveat lector — All work and ideas presented here may not be accurate and should be verified before application.

Changes on setting up WordPress

Firstly, I’d like to congratulate the authors on an extremely simple install process. I’ve tried other Tomcat-based CMS software, and it was always an exercise in futility to keep it up and running properly. PHP in the LAMP architecture seems to be the way to go.

Since I was setting this up on my own server, I decided to use a trick that keeps file management easier for me: install WP to my home directory and simply bind-mount it under the /var/www/html/webroot directory.

As root…

# mount --bind /home/username/wordpress /var/www/html/webroot/blog/

and add

/home/username/wordpress /var/www/html/webroot/blog none bind 0 0

to /etc/fstab