QFD: House of Quality Template

Searching the web, I could not find a quality HOQ template that was not locked or protected, or that was complete enough for my needs. So I decided to create one and share it. The attached template was created in Excel 2007 and contains no macros or VB code. Nothing is locked or protected, and it can be resized at will (though this may require a little cleanup afterwards). Drop-down lists are generated with the Data Validation tool, in-cell charts through the REPT function, cell highlighting with conditional formatting, and the line charts are simple Excel graphs, updated automatically, that are aligned with the cells.

You can download the MS Excel 2007 template here and the OOo 3.1.1 template here.

I place this template into the public domain. Feel free to use, modify, redistribute, etc. Though if you do find it useful, let me know.


Caveat lector — All work and ideas presented here may not be accurate and should be verified before application.

Calculating Mooney-Rivlin Constants

In this post we will look at the procedure for determining the Mooney-Rivlin constants from simple tensile test data of an elastomeric solid. The definition and derivation of the material model is left to others. For our purposes, all we need to know is that the material model yields a predicted engineering stress under simple tension of

$$\sigma = \left(2C_1 + \frac{2C_2}{\lambda}\right)\left(\lambda - \frac{1}{\lambda^2}\right)$$

where $C_1$ and $C_2$ are the constants we need to determine and $\lambda$ is the stretch ratio, defined as the ratio of the stretched length to the initial length of the sample. The stretch ratio can be defined in terms of the strain measured in a simple tensile test:

$$\lambda = \frac{L}{L_0} = 1 + \varepsilon$$

We’ll start with a set of published stress–strain data for a 40 Shore A material from GLS Corporation. The stress–strain curve from the literature is shown below.

Using g3data we can extract the points and tabulate them. For simplicity, we also convert the strain to stretch ratio and the stress to SI units. The data file can be downloaded here.

Strain    Stress (psi)    Stretch Ratio (λ)    Stress (Pa)
0         0               1                    0
0.5       141.2           1.5                  9.74 x 10^5
1.0       204.9           2                    1.41 x 10^6
2.0       299.8           3                    2.07 x 10^6
3.0       367.6           4                    2.53 x 10^6
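
As a quick check of the tabulated conversion, here is a minimal R sketch (the variable names and the psi-to-pascal factor are mine, not from the original data file):

# Digitized points from the published curve
strain     <- c(0, 0.5, 1.0, 2.0, 3.0)
stress_psi <- c(0, 141.2, 204.9, 299.8, 367.6)

psi_to_pa  <- 6894.76        # assumed: 1 psi = 6894.76 Pa
stretch    <- strain + 1     # lambda = 1 + strain
stress_pa  <- stress_psi * psi_to_pa

round(stress_pa)             # ~0, 974000, 1413000, 2067000, 2535000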

Using R to perform the regression is easy:

1. Read in the data from the file G7940.dat.

> SS_Data <- read.table("c:/G7940.dat", header = TRUE)

2. Examine the imported data.

> SS_Data
  Strain  Stress
1    0.0       0
2    0.5  974000
3    1.0 1410000
4    2.0 2070000
5    3.0 2530000

3. The "attach" command allows us to access the variables directly without having to reference the original structure.

> attach(SS_Data)

4. A quick plot of the imported data can then be generated.

> plot(Strain, Stress)

5. Since the MR model uses the stretch ratio, not the strain, we convert the strains and then plot the stress vs. stretch ratio.

> Stretch = Strain + 1
> plot(Stretch, Stress)


6. Now for the curve fitting itself. We use the "nls" function, which stands for "Nonlinear Least Squares". We provide the model from the equation at the beginning of the post, where C1 and C2 are the two constants we wish to determine, supply guesses for the initial values of those constants, and request that the trace be printed. Then we ask for a summary of the results. This prints the fitted values for our two constants along with some other helpful data.

> MR.fit <- nls(Stress ~ (2*C1+2*C2/Stretch)*(Stretch-1/(Stretch^2)),
+ start=list(C1=100,C2=100), trace=T)
1.361224e+13 :  100 100 
2117860971 :  239897.5 335348.9 
> summary(MR.fit)
Formula: Stress ~ (2 * C1 + 2 * C2/Stretch) * (Stretch - 1/(Stretch^2))
Parameters:
   Estimate Std. Error t value Pr(>|t|)    
C1   239898       8001   29.98 8.15e-05 ***
C2   335349      23840   14.07 0.000778 ***
---
Signif. codes:  0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1 

Residual standard error: 26570 on 3 degrees of freedom

Number of iterations to convergence: 1 
Achieved convergence tolerance: 5.802e-07 

7. Lastly, we would like to see how well our curve fit matches the data. First we extract the coefficients into a vector "C". With the plot shown above still open, we add the curve and clean the plot up with a title. Notice the method for indexing the coefficients.

> C <- coef(MR.fit)
> curve((2*C[1]+2*C[2]/x)*(x-1/(x^2)), from=1.0, to = 4.0, add=TRUE)
> title(main="Mooney-Rivlin Fit to Simple Tensile Data")
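
To go beyond the visual check, the fitted and observed stresses can be tabulated side by side. A small sketch (not in the original post) using the standard predict() and residuals() accessors for an nls fit:

# Tabulate observed vs. fitted stresses and their residuals
data.frame(Stretch  = Stretch,
           Observed = Stress,
           Fitted   = predict(MR.fit),
           Residual = residuals(MR.fit))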


Caveat lector — All work and ideas presented here may not be accurate and should be verified before application.

Basic R – Univariate Dataset Graphics in R, Exploratory Data Analysis

Continuing from this post, we extend the analysis of univariate data by generating graphical views of the Michelson speed of light data.

The simplest method to view the distribution is the stem and leaf plot.


> stem(C$V1)

  The decimal point is 1 digit(s) to the left of the |

  2996 | 2
  2996 | 5
  2997 | 222444
  2997 | 566666788999
  2998 | 000001111111111223344444444
  2998 | 5555555566677778888888888999
  2999 | 0011233444
  2999 | 55566667888
  3000 | 000
  3000 | 7

We can also create a boxplot of the data. Notice the method for inserting a mathematical expression in the axis label.


> boxplot(C)
> title(ylab=expression(paste(km/sec," ",plain( x )," ",10^3)))
> title("Boxplot of Speed of Light")

Which gives:

[Figure: Boxplot of Speed of Light]

And a histogram gives us a graphical view similar to the stem-and-leaf.


> hist(C$V1, main="Histogram of Data", xlab=expression(paste(km/sec," ",plain( x )," ",10^3)))

[Figure: Histogram of Data]

It’s always a good idea to generate a Q-Q Plot to check for normality.


> qqnorm(C$V1)

[Figure: Normal Q-Q plot of the data]
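
A reference line makes the Q-Q plot easier to read. As a small extension (not in the original post), qqline() draws a line through the first and third quartiles:

# Redraw the Q-Q plot with a quartile reference line
qqnorm(C$V1)
qqline(C$V1)   # points falling near the line suggest approximate normality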


Caveat lector — All work and ideas presented here may not be accurate and should be verified before application.

Calculating the Sensitivity of a Transfer Function to Independent Variables Using the Taylor Series

In this post we discussed the use of the Taylor series to evaluate how measurement uncertainties propagate into a calculated result. We can use a similar approach to determine the sensitivity of a transfer function to known perturbations in the independent variables.

[Figure: right conical frustum with end radii R and r and height h]

In this case we will look at the volume of a right conical frustum, or truncated cone. The volume can be calculated from the three variables shown in the figure: the end radii $R$ and $r$ and the height $h$. The volume is given as

$$V = \frac{\pi h}{3}\left(R^2 + Rr + r^2\right)$$

The first step is to calculate the partial derivatives of $V$ with respect to the independent variables:

$$\frac{\partial V}{\partial R} = \frac{\pi h}{3}\left(2R + r\right), \qquad \frac{\partial V}{\partial r} = \frac{\pi h}{3}\left(R + 2r\right), \qquad \frac{\partial V}{\partial h} = \frac{\pi}{3}\left(R^2 + Rr + r^2\right)$$

The total uncertainty in the volume, $\delta V$, is given in terms of the uncertainties in the independent variables, $\delta R$, $\delta r$ and $\delta h$:

$$\delta V = \left|\frac{\partial V}{\partial R}\right|\delta R + \left|\frac{\partial V}{\partial r}\right|\delta r + \left|\frac{\partial V}{\partial h}\right|\delta h$$

Each term of the form $\left|\partial V/\partial x_i\right|\delta x_i$ is the contribution of the perturbation of the independent variable $x_i$ to the total perturbation of the function. Therefore we can calculate the sensitivities, $S_i$, of the volume to small changes in each of the three variables $R$, $r$ and $h$ by calculating the percentage contribution of each term to the total perturbation. Typically, we also wish to know the direction in which a perturbation of an independent variable will drive the total, so we remove the absolute values and evaluate the partial derivatives while maintaining their signs:

$$S_i = \frac{\dfrac{\partial V}{\partial x_i}\,\delta x_i}{\delta V} \times 100\%$$

It is also evident that $\sum_i \left|S_i\right| = 100\%$.


Example:

Suppose we have a design for a conical frustum with nominal dimensions of $R$ = 0.500 in, $r$ = 0.375 in and $h$ = 1.250 in. Our manufacturing process can hold $R$ to ±0.002 in, $r$ to ±0.007 in and $h$ to ±0.010 in. We then have the following inputs to our formulae:

$$\delta R = 0.002 \text{ in}, \qquad \delta r = 0.007 \text{ in}, \qquad \delta h = 0.010 \text{ in}$$

From this we can calculate the nominal volume of our manufactured part:

$$V = \frac{\pi (1.250)}{3}\left(0.500^2 + 0.500 \times 0.375 + 0.375^2\right) = 0.757 \text{ in}^3$$

Next we can calculate the partial derivatives:

$$\frac{\partial V}{\partial R} = 1.800 \text{ in}^2, \qquad \frac{\partial V}{\partial r} = 1.636 \text{ in}^2, \qquad \frac{\partial V}{\partial h} = 0.605 \text{ in}^2$$

And having the partial derivatives, we can calculate the total propagated uncertainty in the nominal volume:

$$\delta V = (1.800)(0.002) + (1.636)(0.007) + (0.605)(0.010) = 0.021 \text{ in}^3$$

From this we can obtain the predicted volume of the manufactured part subject to the manufacturing tolerances: 0.757 ± 0.021 in³. We can then calculate the relative sensitivity of the total volume to the manufacturing tolerances specified:

$$S_R \approx 17\%, \qquad S_r \approx 54\%, \qquad S_h \approx 29\%$$

It is important to note that these results are only valid for the initial conditions specified. If the nominal values for the dimensions or the tolerances change, both the uncertainties and sensitivities will change.
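
Since the results depend on the inputs, it is handy to script the arithmetic. A minimal R sketch of the example above (the variable names are mine), useful for re-running the numbers when the dimensions or tolerances change:

# Nominal dimensions (in) and tolerances (in)
R  <- 0.500; r  <- 0.375; h  <- 1.250
dR <- 0.002; dr <- 0.007; dh <- 0.010

V    <- pi * h / 3 * (R^2 + R*r + r^2)   # nominal volume
dVdR <- pi * h * (2*R + r) / 3           # partial derivatives
dVdr <- pi * h * (R + 2*r) / 3
dVdh <- pi * (R^2 + R*r + r^2) / 3

dV <- abs(dVdR)*dR + abs(dVdr)*dr + abs(dVdh)*dh     # total uncertainty
S  <- c(R = dVdR*dR, r = dVdr*dr, h = dVdh*dh) / dV * 100

round(c(V = V, dV = dV), 3)   # 0.757, 0.021
round(S, 1)                   # ~17.1, 54.3, 28.7 (%)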


Caveat lector — All work and ideas presented here may not be accurate and should be verified before application.

Using ‘sed’ to bulk edit text files

Google Analytics provides free yet super-sophisticated website tracking and statistics. Unfortunately, it requires that the following code snippet be inserted into every web page immediately before the </body> tag.


<script type="text/javascript">
var gaJsHost = (("https:" == document.location.protocol) ? "https://ssl." : "http://www.");
document.write(unescape("%3Cscript src='" + gaJsHost + "google-analytics.com/ga.js' type='text/javascript'%3E%3C/script%3E"));
</script>
<script type="text/javascript">
try {
var pageTracker = _gat._getTracker("UA-12232036-2");
pageTracker._trackPageview();
} catch(err) {}</script>

Now with a static website such as www.elizabethpassela.com, that’s a lot of pages to manually update. What we’d like is a way to perform a bulk insert of the required text on all .htm files in the webroot directory. Here’s where ‘sed’ comes to the rescue.

The first task is to create a 'sed' script file called sed.cmd. The first line contains the address pattern /<\/body>/ and the i "insert" command, which tells 'sed' to insert the following text before each line matching </body>. The \ at the end of each line tells 'sed' that another line of the insert text follows, acting as a continuation operator. Since the '/' character is used as the delimiter for the address, any '/' characters that are part of the pattern need to be escaped with \.


/<\/body>/i\
<script type="text/javascript">\
var gaJsHost = (("https:" == document.location.protocol) ? "https://ssl." : "http://www.");\
document.write(unescape("%3Cscript src='" + gaJsHost + "google-analytics.com/ga.js' type='text/javascript'%3E%3C/script%3E"));\
</script>\
<script type="text/javascript">\
try {\
var pageTracker = _gat._getTracker("UA-12232036-2");\
pageTracker._trackPageview();\
} catch(err) {}</script>

Finally, execute the following command to insert the text into all .htm files. The '-i' tells 'sed' to operate on the files in place (if 'sed' is run without the '-i' option, the output is sent to stdout, which is useful for testing).


# sed -i -f ~cbattles/sed.cmd *.htm

A similar approach can be used for replacing and appending text as well as inserting. Additionally, the 'find' command can be piped to 'sed' to operate on selected files.
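
For example, a sketch (not from the original post) of combining 'find' with 'sed' to reach .htm files in subdirectories as well as the top level:

# Edit every .htm file under the current directory, not just the top level
find . -name '*.htm' -print0 | xargs -0 sed -i -f ~cbattles/sed.cmd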


Caveat lector — All work and ideas presented here may not be accurate and should be verified before application.

Word wrap inside <pre> block

In the Arras theme that I use, text contained within a <pre></pre> block was not being wrapped, causing formatting problems in IE (e.g. the sidebar would be forced below the post) and reducing readability. A solution was found on this site. In my case, adding the following code to the end of /wp-content/themes/arras-theme/css/styles/default.css fixed the problem quickly and easily.


pre {
 overflow-x: auto; /* Use horizontal scroller if needed; for Firefox 2, not needed in Firefox 3 */
 white-space: pre-wrap; /* css-3 */
 white-space: -moz-pre-wrap !important; /* Mozilla, since 1999 */
 white-space: -pre-wrap; /* Opera 4-6 */
 white-space: -o-pre-wrap; /* Opera 7 */
 /* width: 99%; */
 word-wrap: break-word; /* Internet Explorer 5.5+ */
}

Manually Backing Up WordPress

To back up, first create a tarball of the website


[root@Kepler Cbattles]# tar -cvjf WordPress.tar.bz2 ./blog

and then dump the MySQL database


[root@Kepler Cbattles]# mysqldump --add-drop-table -h localhost -u wordpress -p wordpress | bzip2 -c > blog.bak.sql.bz2

Put the two files in a safe place. Easy.

Security Note: These files are created in the webroot directory. This means that they are accessible by anyone. Either create them in a safe location or move them after creation.
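
Restoring is the reverse operation; a sketch using only the tools above (adjust the paths and the 'wordpress' database and user names to your installation):

# Unpack the site files back into place
tar -xvjf WordPress.tar.bz2

# Reload the database from the dump; --add-drop-table above makes the
# dump safe to load over an existing database
bunzip2 -c blog.bak.sql.bz2 | mysql -h localhost -u wordpress -p wordpress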


Caveat lector — All work and ideas presented here may not be accurate and should be verified before application.

Basic R — Descriptive Statistics of Univariate Data

This is a basic introductory look at using R to generate descriptive statistics for a univariate data set. Here we will use the historical dataset from Michelson’s experiment to determine the speed of light in air, provided as an ASCII file with header content and the observed speed of light for 100 trials.

We first need to read the data into R. Since the data is in a properly formatted ASCII file, we only need to tell R to skip the first 60 lines, which are header information. R will then import the data into a list of class data.frame.


> C <- read.table("Michelso.dat", skip=60)

We can take a look at the dataset by simply typing the dataset name at the prompt. Here you can see that R automatically assigned the variable V1 to the data.


> C
        V1
1   299.85
2   299.74
3   299.90
4   300.07
...

The summary() command in R provides the summary statistics: Min, 1st Qu., Median, Mean, 3rd Qu. and Max. We call this function with the argument 'C$V1', which tells R to act on the named variable, V1, in the data.frame C. (The options commands set the output number formatting to something realistic.)


> options(scipen=100)
> options(digits=10)
> summary(C$V1)
    Min.  1st Qu.   Median     Mean  3rd Qu.     Max. 
299.6200 299.8075 299.8500 299.8524 299.8925 300.0700 

Standard deviation, trimmed mean and number of data points can be obtained individually.


> sd(C$V1)
[1] 0.07901054782
> mean(C$V1, trim=0.05)
[1] 299.8528889
> length(C$V1)
[1] 100

If we want skewness and kurtosis, we'll need the fBasics package installed:


> install.packages("fBasics")
> library(fBasics)
...
> skewness(C$V1, method="moment")
[1] -0.01798640563
attr(,"method")
[1] "moment"
> kurtosis(C$V1, method="moment")
[1] 3.198586275
attr(,"method")
[1] "moment"

To determine confidence intervals on the mean, we can use the one-sample t-test. We can ignore the reported test against a mean of zero, since in our case a hypothesized mean is neither known nor relevant for confidence interval estimation.


> t.test(C$V1, conf.level=0.99)

	One Sample t-test

data:  C$V1 
t = 37950.9329, df = 99, p-value < 0.00000000000000022
alternative hypothesis: true mean is not equal to 0 
99 percent confidence interval:
 299.8316486 299.8731514 
sample estimates:
mean of x 
 299.8524
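
If only the interval itself is needed, it can be extracted directly from the object returned by t.test(); a quick sketch:

# Extract just the 99% confidence interval
t.test(C$V1, conf.level=0.99)$conf.int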

Another method for obtaining much of this information in a single step can be found in the stat.desc() function from the pastecs package.


> install.packages("pastecs")
> library(pastecs)
...
> options(scipen=100)
> options(digits=4)
> stat.desc(C)
                        V1
nbr.val        100.0000000
nbr.null         0.0000000
nbr.na           0.0000000
min            299.6200000
max            300.0700000
range            0.4500000
sum          29985.2400000
median         299.8500000
mean           299.8524000
SE.mean          0.0079011
CI.mean.0.95     0.0156774
var              0.0062427
std.dev          0.0790105
coef.var         0.0002635
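
stat.desc() can also append normality-related statistics. A sketch using its norm argument, which adds skewness, kurtosis and a Shapiro-Wilk normality test to the table:

# Same table plus skewness, kurtosis and a normality test
stat.desc(C, norm=TRUE)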

We'll look at the generation of some standard statistical plots for exploratory data analysis in a future post.


Caveat lector — All work and ideas presented here may not be accurate and should be verified before application.

Basic Error Propagation Through the Use of Taylor Series

In courses on experimentation, propagated errors are typically treated through the use of a Taylor series expansion to evaluate the total contribution of individual measurement uncertainties to a final calculated result. As an example, suppose we wish to experimentally determine the acceleration of a body due to gravity. We could take an object and drop it a measured distance while recording the elapsed time. From basic physics we know that the distance travelled is proportional to the time squared and that the proportionality constant is $g/2$, or

$$d = \frac{1}{2}gt^2$$

Solving for the acceleration, we obtain

$$g = \frac{2d}{t^2}$$

which is a function of two measured variables, the distance travelled and the elapsed time. Both of these measurements, no matter how carefully obtained, will have some uncertainty. Suppose we measure the distance travelled with a ruler that has graduations every 1 inch and the time with a stopwatch with a resolution of 0.1 seconds. With both of these instruments it is evident that we cannot measure the quantity to a higher resolution than the instrument provides, so it is typical to take the total uncertainty in a measurement as the least significant digit of the scale, centered on the measurement value. This equates to uncertainties in the measurements of ±0.5 inches and ±0.05 seconds.

Let’s say we dropped the object from a height of 36 feet and measured the elapsed time as 1.5 seconds. From the above equation we would find the acceleration to be 32 ft/s². But how carefully did we measure? Was the distance exactly 36 feet (432 inches), or was it 432.3 inches? Was the time 1.48 seconds? As long as these uncertainties are within the predefined ranges established above, we can calculate the total uncertainty in the measurement of the acceleration.

In this post we discussed the approximation of any function by a Taylor series expansion about a specific point. We can apply that technique to determine how the value of the acceleration may vary with perturbations in the input values of time and distance about the measured point. For the simplest implementation, we restrict ourselves to the first order terms of the expansion[1].

Recall that the Taylor series of a function $f(x, y)$ about the point $(a, b)$ is given as

$$f(x, y) = \sum_{n=0}^{\infty} \frac{1}{n!} \left[ (x - a)\frac{\partial}{\partial x} + (y - b)\frac{\partial}{\partial y} \right]^n f(a, b)$$

Expanding this to the first order terms yields

$$f(x, y) \approx f(a, b) + \frac{\partial f}{\partial x}\bigg|_{(a,b)}(x - a) + \frac{\partial f}{\partial y}\bigg|_{(a,b)}(y - b)$$

rewriting as

$$f(x, y) - f(a, b) \approx \frac{\partial f}{\partial x}\bigg|_{(a,b)}(x - a) + \frac{\partial f}{\partial y}\bigg|_{(a,b)}(y - b)$$

We can now see that the left side of the equation evaluates to the change in the function corresponding to a perturbation of $x$ and $y$ by a small amount about $(a, b)$. Examining the partial derivative terms, we can see that we are multiplying the rate of change of the function in a single variable by the change in that variable from the point of interest $(a, b)$. Since we are interested in small perturbations of $x$ and $y$ about the point $(a, b)$, we will denote these changes $\delta x$ and $\delta y$. The change of the function under these perturbations we will denote $\delta f$.

Substituting, we obtain

$$\delta f = \frac{\partial f}{\partial x}\,\delta x + \frac{\partial f}{\partial y}\,\delta y$$

Lastly, since there should be no preference for the uncertainty to be in the positive or negative direction, we take the absolute value of the derivative terms and require that our perturbations be defined as positive:

$$\delta f = \left|\frac{\partial f}{\partial x}\right|\delta x + \left|\frac{\partial f}{\partial y}\right|\delta y$$

This can be generalized to a function in any number of variables as

$$\delta f = \sum_i \left|\frac{\partial f}{\partial x_i}\right| \delta x_i$$

Returning to our example, to find the total uncertainty in the calculated acceleration, we simply need to determine the partial derivatives of the function with respect to the independent variables,

$$\frac{\partial g}{\partial d} = \frac{2}{t^2}, \qquad \frac{\partial g}{\partial t} = -\frac{4d}{t^3}$$

and insert them into our formula:

$$\delta g = \left|\frac{2}{t^2}\right|\delta d + \left|-\frac{4d}{t^3}\right|\delta t$$

Evaluating with our collected data,

$$\delta g = \frac{2}{(1.5)^2}(0.5) + \frac{4(432)}{(1.5)^3}(0.05) = 0.44 + 25.6 = 26.0 \text{ in/s}^2$$

or

$$\delta g \approx 2.2 \text{ ft/s}^2$$

Therefore our calculated acceleration should be given as $g = 32 \pm 2.2 \text{ ft/s}^2$.
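
As a sanity check, the entire example fits in a few lines of R (a sketch; the variable names are mine):

# Measured values (inches, seconds) and instrument-resolution uncertainties
d    <- 432;  tsec <- 1.5
dd   <- 0.5;  dt   <- 0.05

g  <- 2 * d / tsec^2                                    # 384 in/s^2
dg <- abs(2 / tsec^2) * dd + abs(-4 * d / tsec^3) * dt  # first-order uncertainty

c(g = g / 12, dg = dg / 12)                             # ~32 and ~2.2 ft/s^2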

It should be noted that the calculated uncertainty is the worst case possible under the individual assumptions in the treatment. Alternative approaches based on statistical methods may be more realistic. Another useful application of these techniques is to determine the sensitivity of the dependent variable to the inputs of the function. These will be addressed in subsequent posts.

Caveat lector — All work and ideas presented here may not be accurate and should be verified before application.


[1]
2nd order and higher terms include the square and higher powers of the perturbation amount. Assuming that the perturbation is small, the square of this small perturbation is much smaller still, and higher order terms become negligible. If the relative size of the perturbation to the curvature of the function is in doubt, the magnitude of the 2nd and higher order terms should be checked.