Discussion of “Response surface design evaluation and comparison”
Journal of Statistical Planning and Inference, Volume 139, Issue 2, pp. 657-659
Selecting the best possible experimental design for a given situation is not a simple matter, because many criteria ought to be taken into account when choosing among alternative design options. In their article, the authors focus on the use of graphical methods for comparing experimental designs. In particular, the article reviews most of the literature on variance dispersion graphs and fraction of design space plots. It explains that sophisticated variance dispersion graphs have been proposed in the literature for assessing model robustness, for evaluating split-plot designs, for evaluating the impact of measurement error on mixture designs, and for mixture experiments involving process variables. There is no doubt that variance dispersion graphs and fraction of design space plots are very useful tools for comparing alternative design options.

Often, experimental designs are selected using some design optimality criterion, such as the estimation-based D-optimality criterion and the prediction-based G- and V-optimality criteria (see, e.g., Atkinson and Donev, 1992, and Myers and Montgomery, 2002). Rightly so, such an approach is often criticized because the design selection is then based on one-number summaries of the properties of the design options. The G-optimality criterion, for example, favours experimental designs that have the smallest maximum prediction variance over the region of interest, without considering the distribution of the magnitude of the prediction variance throughout that region. This shortcoming of the G-optimality criterion is overcome by variance dispersion graphs and fraction of design space plots, which provide a detailed picture of the predictive quality of experimental designs throughout the entire region of interest.
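To make concrete what a fraction of design space plot summarizes, the following sketch computes the scaled prediction variance SPV(x) = N f(x)'(X'X)^{-1} f(x) for a first-order model on a small illustrative design, evaluates it at uniformly sampled points in the region, and reports the quantiles an FDS curve would display. The design, model, and sample sizes are illustrative assumptions, not taken from the article under discussion.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative 2^2 factorial design with a centre point (an assumption,
# not a design from the article); first-order model with intercept.
design = np.array([[-1, -1], [-1, 1], [1, -1], [1, 1], [0, 0]], dtype=float)

def model_matrix(points):
    """First-order model matrix: intercept plus the two linear terms."""
    return np.column_stack([np.ones(len(points)), points])

X = model_matrix(design)
N = len(design)
XtX_inv = np.linalg.inv(X.T @ X)

def scaled_prediction_variance(points):
    """SPV(x) = N * f(x)' (X'X)^{-1} f(x), evaluated row by row."""
    F = model_matrix(points)
    return N * np.einsum('ij,jk,ik->i', F, XtX_inv, F)

# Sample the cuboidal region [-1, 1]^2 uniformly, as an FDS plot does.
pts = rng.uniform(-1.0, 1.0, size=(10_000, 2))
spv = scaled_prediction_variance(pts)

# An FDS plot graphs quantiles of the prediction variance against the
# fraction of the design space; here we simply print a few quantiles.
for frac in (0.25, 0.5, 0.75, 1.0):
    print(f"fraction {frac:.2f}: SPV <= {np.quantile(spv, frac):.3f}")
```

For this design, SPV(x) = 1 + 1.25(x1² + x2²), so the curve runs from 1 at the centre of the region up to 3.5 at the corners; a flatter curve over most of the design space indicates more stable prediction quality.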
It is therefore laudable that software packages such as Design Expert and JMP have implemented similar graphical methods for evaluating the predictive performance of experimental designs. One regrettable aspect of the vast literature on the construction of variance dispersion graphs and fraction of design space plots, however, is its emphasis on the use of scaled prediction variances.
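A common concern about scaled prediction variances (offered here as an interpretation, not as a claim made in the article) is that multiplying the unscaled variance by the run size N penalizes larger designs: adding runs lowers the actual variance of a prediction but can leave the scaled variance unchanged. The hedged sketch below illustrates this with two assumed designs, a 2^2 factorial and the same design replicated twice.

```python
import numpy as np

def prediction_variances(design, point):
    """Return (unscaled, scaled) prediction variance at `point`
    for a first-order model fitted on `design`."""
    X = np.column_stack([np.ones(len(design)), design])
    f = np.concatenate([[1.0], point])
    upv = f @ np.linalg.inv(X.T @ X) @ f   # Var(y_hat) / sigma^2
    return upv, len(design) * upv          # SPV multiplies by run size N

# Illustrative designs (assumptions, not from the article): a 2^2
# factorial and the same factorial replicated twice.
factorial = np.array([[-1, -1], [-1, 1], [1, -1], [1, 1]], dtype=float)
replicated = np.vstack([factorial, factorial])

x = np.array([0.5, 0.5])
upv1, spv1 = prediction_variances(factorial, x)
upv2, spv2 = prediction_variances(replicated, x)

# Doubling the runs halves the unscaled prediction variance, but the
# scaled variance is identical, so SPV hides the benefit of extra runs.
print(f"unscaled: {upv1:.4f} vs {upv2:.4f}")
print(f"scaled:   {spv1:.4f} vs {spv2:.4f}")
```

Here the unscaled variance drops from 0.375 to 0.1875 when the design is replicated, while the scaled variance stays at 1.5 for both designs, which is why comparisons based solely on scaled prediction variances can be misleading when the candidate designs differ in run size.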