Designation: C1067 − 12

Standard Practice for Conducting a Ruggedness Evaluation or Screening Program for Test Methods for Construction Materials^1

This standard is issued under the fixed designation C1067; the number immediately following the designation indicates the year of original adoption or, in the case of revision, the year of last revision. A number in parentheses indicates the year of last reapproval. A superscript epsilon (ε) indicates an editorial change since the last revision or reapproval.

1. Scope*

1.1 This practice covers a procedure for evaluating the ruggedness of a test method by determining the effects of different experimental factors on the variation of test results. The procedure is intended for use during the development of a test method, before an interlaboratory study, such as those described in Practices C802 and E691, is executed.

1.2 This practice covers, in general terms, techniques for planning, collecting data, and analyzing results from a few laboratories. Appendix X1 provides the details of the procedure with an example, and Appendix X2 provides additional information on the methodology.

1.3 The practice is not intended to give information pertinent to estimating multilaboratory precision.

1.4 The system of units for this practice is not specified. Dimensional quantities in the practice are presented only in illustrations of calculation methods.

1.5 This standard does not purport to address all of the safety concerns, if any, associated with its use. It is the responsibility of the user of this standard to establish appropriate safety and health practices and determine the applicability of regulatory limitations prior to use.

2. Referenced Documents

2.1 ASTM Standards:^2
C670 Practice for Preparing Precision and Bias Statements for Test Methods for Construction Materials
C802 Practice for Conducting an Interlaboratory Test Program to Determine the Precision of Test Methods for Construction Materials
E456 Terminology Relating to Quality and Statistics
E691 Practice for Conducting an Interlaboratory Study to Determine the Precision of a Test Method
E1169 Practice for Conducting Ruggedness Tests

3. Terminology

3.1 Definitions:
3.1.1 For definitions of statistical terms used in this standard, refer to Terminology E456.

3.2 Definitions of Terms Specific to This Standard:
3.2.1 determination, n—numerical value of a characteristic of a test specimen measured in accordance with the given test method.
3.2.2 effect, n—of a factor, the difference in the measured characteristics at each level of a factor averaged over all levels of other factors in the experiment.
3.2.3 factor, n—a condition or element in the test procedure or laboratory environment that can be controlled and that is a potential source of variation of determinations.
3.2.4 level, n—the value or setting of a factor associated with a determination.
3.2.5 replication, n—the act of obtaining, under specified conditions, two or more determinations on identical specimens.
3.2.5.1 Discussion—Replicate determinations are typically required to be obtained by the same operator, using the same apparatus, on specimens that are as similar as possible, and during a short time interval.
3.2.6 ruggedness, n—the characteristic of a test method such that determinations are not influenced to a statistically significant degree by small changes in the testing procedure or environment.
3.2.6.1 Discussion—Statistical significance is evaluated by comparing the observed variation due to a factor to the expected variation due to chance alone.
3.2.7 screening, n—a planned experiment using a low number of determinations to detect, among many factors, those that have a significant effect on variation of determinations compared with chance variation.
3.2.7.1 Discussion—In this practice, the influence of seven factors is evaluated using a replicated set of eight determinations, that is, a total of 16 determinations.

^1 This practice is under the jurisdiction of ASTM Committee C09 on Concrete and Concrete Aggregates and is the direct responsibility of Subcommittee C09.94 on Evaluation of Data (Joint C09 and C01). Current edition approved July 1, 2012. Published September 2012. Originally approved in 1987. Last previous edition approved in 2007 as C1067 – 00 (2007). DOI: 10.1520/C1067-12.
^2 For referenced ASTM standards, visit the ASTM website, www.astm.org, or contact ASTM Customer Service at service@astm.org. For Annual Book of ASTM Standards volume information, refer to the standard's Document Summary page on the ASTM website.

*A Summary of Changes section appears at the end of this standard.

Copyright © ASTM International, 100 Barr Harbor Drive, PO Box C700, West Conshohocken, PA 19428-2959, United States

4. Summary of Practice

4.1 The practice requires that the user develop, from theoretical or practical knowledge, or both, a list of factors that plausibly would cause significant variation in test results (determinations) if the factors were not controlled. The technique is limited to the analysis of the effects of seven factors and requires 1/16 of the determinations that would be required to evaluate seven factors in a full factorial study. Procedures exist for analysis of smaller and larger numbers of factors (see Practice E1169), but seven is a convenient number for many test methods for construction materials. The seven-factor analysis requires 16 determinations by each laboratory. The procedure can be executed usefully by a single laboratory, but sometimes additional information can be obtained if it is repeated in one or two additional laboratories.

4.2 The procedure requires that two levels of each factor be identified, and 16 determinations be obtained with prescribed combinations of factor levels. The levels assigned to a factor may be quantitative or qualitative (for example, 20°C versus 25°C, or brass versus steel).

4.3 After data are acquired, a statistical procedure is applied to establish which of the factors under study have a statistically significant effect on test results.

5. Significance and Use

5.1 The purpose of a ruggedness evaluation, or screening program, is to determine the sensitivity of the test method to changes in levels of pertinent operating factors using a small number of tests. Normally, operating conditions for a test method are defined along with allowable tolerances.
A ruggedness analysis determines the effect of "worst-case" variation in operating conditions within the specified tolerances. If the ruggedness evaluation indicates high variation (poor precision), the method can be revised with smaller tolerances on operating conditions to improve the precision.

5.2 This practice evaluates the effects of seven factors using eight treatments. The disadvantage of this approach is that it only estimates the main effects of the factors and does not detect the effects of interactions among factors. For this reason, this is a screening program, and additional investigation is required to establish whether there are interaction effects.

5.3 A major reason for poor precision in test methods is the lack of adequate control over the sources of variation in testing procedures or testing environments. These sources of variation often are not controlled adequately because they were not identified during the development of the test procedures as having a large effect on the determinations. This practice provides a systematic procedure to establish the required degree of control for different testing parameters.

5.4 All new test methods must be subjected to an interlaboratory program to develop a precision and bias statement. These programs can be expensive and lengthy, and the result may show that the method is too variable and should not be published without further revision. Interlaboratory studies may give the subcommittee an indication that the method is too variable, but they do not usually give a clear picture of the causes of the variation. Application of this practice using one or two laboratories before finalizing the test method and conducting the interlaboratory study is an economical way to determine these causes.

5.5 Many existing test methods were developed before there was a requirement for precision and bias statements.
Since this became a requirement, most of these test methods have developed precision and bias statements, and the result is that many have been found to suffer from relatively large amounts of variation. This practice provides a relatively simple and economical way to investigate the causes of variation in test methods, so that a subcommittee will have some guidance as to which parts of the test method need to be revised.

5.6 The procedure can be used for a screening program within a single laboratory, but involvement of at least three laboratories is recommended, particularly if the single laboratory were to be the one that developed the test method. This is particularly important for new test methods. The originating laboratory is so much a part of the development of the test method that it is difficult for it to be objective in spotting any problems in the clarity of the test method directions. Two additional laboratories will probably contribute fresh critical review of the validity of the test method and provide assistance in clarifying the instructions of the test method when needed. This practice, however, is not intended to provide information on multilaboratory precision, but it does provide some information on single-operator precision, which could be used to develop a temporary repeatability statement until the interlaboratory study is completed.

6. Materials

6.1 The number and types of material shall cover the range of material properties to which the test method is applicable. The test method may not apply to material types or property values outside the range evaluated. Three to five materials with different properties will usually be sufficient.

6.1.1 Some preliminary testing may help the laboratories involved determine the materials that will be used in the screening program.

7. Procedure

7.1 Determine the number of laboratories that will participate in the screening program and which materials each will use.
The maximum amount of information is obtained if all laboratories include all materials in their part of the program; however, cost can be reduced if each laboratory uses a different material. In this case, caution must be exercised in interpreting the results because laboratory-dependent effects cannot be separated from material-dependent effects.

7.2 Factors that are likely to have the greatest effect on the variability of the determinations are selected for study. Levels of these factors are determined by selecting the minimum and maximum levels that would plausibly occur in the execution of the test method if there were no particular efforts to control them. Levels often represent quantitative factors, such as temperature or pressure, but they may also represent qualitative factors, such as old versus new or wet versus dry. Only two levels are allowed for each factor. In this practice, factors are assigned letter designations, A through G, and the two levels of each factor are designated with upper and lower cases of these letters, as shown in Table 1.

NOTE 1—In textbooks dealing with design of experiments, factor levels are often denoted with plus (+) and minus (−) signs.

7.3 Assign combinations of factor levels to each determination according to Table 1. The eight determinations will be replicated; therefore, the full study on each material will require 16 determinations. Run the 16 determinations in random order.

7.4 To analyze the results, construct a 16-row by 16-column results matrix composed of ±1 values as shown in Table 2. The values in row 1 are all +1. The values in rows 2 to 8 for each replicate set correspond to the high and low settings of the factors as given in Table 1. In rows 9 to 16, the pattern of signs for the first replicate set repeats the pattern in rows 1 to 8; the signs for the second replicate set are reversed from those for the first set.
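As a minimal illustration (not part of the standard), the Table 1 level pattern, the randomized run order of 7.3, and the 16 by 16 sign matrix of 7.4 can be generated in code. The encoding of lower-case levels as −1 and upper-case levels as +1 is an assumption of this sketch:

```python
import random

# Table 1 pattern: one row per base determination (1-8), one column per
# factor A-G; -1 encodes the lower-case level, +1 the upper-case level.
DESIGN = [
    [-1, -1, -1, +1, +1, +1, -1],  # determination 1 (9):  a b c D E F g
    [-1, -1, +1, +1, -1, -1, +1],  # determination 2 (10): a b C D e f G
    [-1, +1, -1, -1, +1, -1, +1],  # determination 3 (11): a B c d E f G
    [-1, +1, +1, -1, -1, +1, -1],  # determination 4 (12): a B C d e F g
    [+1, -1, -1, -1, -1, +1, +1],  # determination 5 (13): A b c d e F G
    [+1, -1, +1, -1, +1, -1, -1],  # determination 6 (14): A b C d E f g
    [+1, +1, -1, +1, -1, -1, -1],  # determination 7 (15): A B c D e f g
    [+1, +1, +1, +1, +1, +1, +1],  # determination 8 (16): A B C D E F G
]

def sign_matrix(design):
    """Build the 16 x 16 matrix of +1/-1 coefficients described in 7.4."""
    base = [[1] * 8]                                   # row 1: all +1
    base += [[design[i][f] for i in range(8)]          # rows 2-8: factor signs
             for f in range(7)]
    rows = [row + row for row in base]                 # rows 1-8: set 2 repeats set 1
    rows += [row + [-s for s in row] for row in base]  # rows 9-16: set 2 reversed
    return rows

alpha = sign_matrix(DESIGN)

# 7.3: the 16 determinations (two replicates of the eight settings) are run
# in random order.
run_order = random.sample(range(1, 17), 16)
for det in run_order:
    settings = DESIGN[(det - 1) % 8]   # replicate set 2 repeats set 1's settings
    print(det, "".join(f.upper() if s > 0 else f.lower()
                       for f, s in zip("ABCDEFG", settings)))
```

Because the design is a balanced two-level fractional factorial, the rows of the resulting sign matrix are mutually orthogonal, which is what allows each factor's effect to be averaged over the levels of the other factors.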
The various combinations of plus and minus values in Table 2 are applied to the values of the 16 determinations, and various sums of the signed determinations are calculated. For each row of Table 2, calculate the Z- and W-statistics using Eq 1 and Eq 2.

    Z_r = \sum_{i=1}^{16} \alpha_{ri} d_i        (1)

    W_r = \frac{Z_r^2}{16}        (2)

where:
r = row number as shown in Table 2, where r = 1 to 16,
i = determination number ranging from 1 to 16,
α_{ri} = +1 or −1 as defined in Table 2 for each row number and determination number, and
d_i = determination number i as defined in Table 1.

7.5 The Z-statistic for row 1 (Z_1) represents the sum of the 16 determinations, and Z_1/16 is the overall average of the 16 determinations. The Z-statistics for rows 2 through 8 (Z_2 through Z_8) are related to the effects of each of the seven factors (see Note 2). These values of Z represent the differences between the sum of the determinations at the high level of the factor and the sum of the determinations at the low level of the factor. The Z-values are divided by eight to obtain the effect of each factor averaged over the levels of the other factors. For example, Z_3/8 is the average effect of factor B as it is varied from the low level to the high level.

NOTE 2—A positive value for an effect of a factor means that the response increases as the factor level is changed from its low level to its high level. The opposite is the case for a negative effect. Recall that an effect is the average of the determinations at the high setting minus the average at the low setting of the factor.

7.6 The W values are various mean squares. W_1 is the mean of the square of the sum of all determinations and is not used in this analysis. The values W_2 to W_8 are the mean squares for each factor and are compared with the random error (see Note 3).
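Eq 1, Eq 2, and the factor effects of 7.5 can be sketched as follows, using hypothetical determination values; the −1/+1 encoding of the Table 1 levels is again an assumption of this sketch:

```python
# Table 1 pattern (-1 = lower-case level, +1 = upper-case level) and the
# 16 x 16 sign matrix of Table 2, built as described in 7.4.
DESIGN = [
    [-1, -1, -1, +1, +1, +1, -1], [-1, -1, +1, +1, -1, -1, +1],
    [-1, +1, -1, -1, +1, -1, +1], [-1, +1, +1, -1, -1, +1, -1],
    [+1, -1, -1, -1, -1, +1, +1], [+1, -1, +1, -1, +1, -1, -1],
    [+1, +1, -1, +1, -1, -1, -1], [+1, +1, +1, +1, +1, +1, +1],
]
base = [[1] * 8] + [[DESIGN[i][f] for i in range(8)] for f in range(7)]
alpha = [r + r for r in base] + [r + [-s for s in r] for r in base]

# Hypothetical determinations d1..d16 (set 2 replicates set 1's settings).
d = [10.2, 11.5, 10.8, 11.1, 10.5, 10.9, 11.3, 11.8,
     10.4, 11.3, 10.9, 11.0, 10.6, 10.8, 11.2, 11.9]

Z = [sum(a * x for a, x in zip(row, d)) for row in alpha]  # Eq 1
W = [z * z / 16 for z in Z]                                # Eq 2

print("overall average:", Z[0] / 16)                 # Z_1 / 16
for f, name in enumerate("ABCDEFG"):
    print(f"average effect of {name}: {Z[f + 1] / 8:+.3f}")  # Z_r / 8 (7.5)
```

Note that Z[0] corresponds to Z_1 of the standard (Python indexes from zero), so the effect of factor B, for example, is Z[2] / 8.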
The W values for rows 9 through 16 (W_9 to W_16) are used to calculate the error variance (s^2) according to Eq 3 (see Note 4).

    s^2 = \frac{1}{8} \sum_{r=9}^{16} W_r        (3)

NOTE 3—Appendix X2 provides additional information on the meaning of the term "mean squares."

NOTE 4—The error variance s^2 is the pooled variance of the two replicate determinations for each of the eight conditions.

7.7 To establish whether a factor has a statistically significant effect on the results, compute the F-statistic for each factor using Eq 4, where W_r is the mean square (rows 2 through 8) for the factor under consideration.

    F_f = \frac{W_r}{s^2}        (4)

TABLE 1 Pattern of Assigning Levels^A to Seven Factors

Determination          Factor
Number^B          A   B   C   D   E   F   G
1 (9)             a   b   c   D   E   F   g
2 (10)            a   b   C   D   e   f   G
3 (11)            a   B   c   d   E   f   G
4 (12)            a   B   C   d   e   F   g
5 (13)            A   b   c   d   e   F   G
6 (14)            A   b   C   d   E   f   g
7 (15)            A   B   c   D   e   f   g
8 (16)            A   B   C   D   E   F   G

^A Lower case letter indicates one level for the factor and upper case letter indicates the other level.
^B The numbers in parentheses refer to the determinations in replicate set 2.

TABLE 2 Matrix of Signs to be Applied to 16 Determinations (d_1 to d_16) to Calculate Z- and W-Statistics

       Sign Applied to Each Determination in Computing Z_r
Row    Replicate Set 1 (determinations 1-8)    Replicate Set 2 (determinations 9-16)    Z     W
       1   2   3   4   5   6   7   8           9   10  11  12  13  14  15  16
1      +1  +1  +1  +1  +1  +1  +1  +1          +1  +1  +1  +1  +1  +1  +1  +1           Z_1   W_1
2      -1  -1  -1  -1  +1  +1  +1  +1          -1  -1  -1  -1  +1  +1  +1  +1           Z_2   W_2
3      -1  -1  +1  +1  -1  -1  +1  +1          -1  -1  +1  +1  -1  -1  +1  +1           Z_3   W_3
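Continuing the sketch, Eq 3 and Eq 4 can be computed as follows. The determination values are hypothetical, and the comparison against a critical F value of about 5.32 (the upper 5 % point of F with 1 and 8 degrees of freedom: 1 for each factor mean square and 8 for the pooled error) is an assumption of this illustration, not a requirement stated in the text above:

```python
# Hypothetical determinations d1..d16; d[i + 8] replicates d[i]'s settings.
d = [10.2, 11.5, 10.8, 11.1, 10.5, 10.9, 11.3, 11.8,
     10.4, 11.3, 10.9, 11.0, 10.6, 10.8, 11.2, 11.9]

# Table 1 pattern: -1 = lower-case level, +1 = upper-case level.
DESIGN = [
    [-1, -1, -1, +1, +1, +1, -1], [-1, -1, +1, +1, -1, -1, +1],
    [-1, +1, -1, -1, +1, -1, +1], [-1, +1, +1, -1, -1, +1, -1],
    [+1, -1, -1, -1, -1, +1, +1], [+1, -1, +1, -1, +1, -1, -1],
    [+1, +1, -1, +1, -1, -1, -1], [+1, +1, +1, +1, +1, +1, +1],
]

# Eq 3, computed via Note 4: s2 is the pooled variance of the two replicate
# determinations for each of the eight conditions.
s2 = sum((d[i] - d[i + 8]) ** 2 for i in range(8)) / 16

F_CRIT = 5.32  # assumed: approx. upper 5 % point of F with 1 and 8 df

F = {}
for f, name in enumerate("ABCDEFG"):
    # Rows 2-8 of Table 2 repeat the same signs for both replicate sets,
    # so Z for a factor can be taken over the pair sums d[i] + d[i + 8].
    Zf = sum(DESIGN[i][f] * (d[i] + d[i + 8]) for i in range(8))  # Eq 1
    Wf = Zf * Zf / 16                                             # Eq 2
    F[name] = Wf / s2                                             # Eq 4
    flag = "significant" if F[name] > F_CRIT else "not significant"
    print(f"factor {name}: F = {F[name]:7.2f}  ({flag})")
```

With these made-up values, most factors come out significant because the replicate pairs agree closely, illustrating that the F test compares each factor's mean square against chance variation alone.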