

Statistical Properties

For a statistician, a block design is a plan for an experiment. The $v$ points of the block design are usually called treatments, a general term encompassing any set of $v$ distinct experimental conditions of interest. The purpose of the experiment is to compare the treatments in terms of the magnitudes of change they induce in a response variable, call it $y$. These magnitudes are called treatment effects.

In a typical experiment (there are many variations on this, but we stick to the basics to start), each treatment is employed for the same number $r$ of experimental runs. Each run is the application of the treatment to an individual experimental unit (also called a plot), followed by observation of the response $y$. An experiment to compare $v$ treatments using $r$ runs (or ``replicates'') per treatment requires a total of $vr$ experimental units.

If the $vr$ experimental units are homogeneous (for the purposes of the experiment, essentially undifferentiable) then the assignment of the $v$ treatments, each to $r$ units, is made completely at random. Upon completion of the experiment, differences in treatment effects are assessed via differences in the $v$ means of the observed values $y$ for the $v$ treatments (each mean is the average of $r$ observations). This simplest of experiments is said to follow a completely randomized design (it is not a block design).

The concept of a blocked experiment comes into play when the $vr$ experimental units are not homogeneous. A block is just a subset of the experimental units which are essentially undifferentiable, just as described in the previous paragraph. If we can partition our $vr$ heterogeneous units into $b$ sets (blocks) of $k$ homogeneous units each, then after completion of the experiment, when the statistical analysis of results is performed, we are able to isolate the variability in response due to this systematic unit heterogeneity.

To make clear the essential issue here, consider a simple example. We have $v=3$ fertilizer cocktails (the treatments) and will compare them in a preliminary greenhouse experiment employing $vr=6$ potted tobacco plants (the experimental units). If the pots are identically prepared with a common soil source, and each receives a single plant from the same seed set and of similar size and age, then we deem the units homogeneous. We simply choose two pots at random for the application of each cocktail. This is a completely randomized design. At the end of the experimental period (two months, say) we measure $y$ = the total biomass per pot.

Now suppose three of the plants are clearly larger than the remaining three. The statistically ``good'' design is also the intuitively appealing one: make separate random assignments of the three cocktails to the three larger plants, and to the three smaller plants, so that each cocktail is used once with a plant of each size. We have blocked (by size) the 6 units into two homogeneous sets of 3 units each, then randomly assigned treatments within blocks. Notice that there are 3!$\times$3!=36 possible assignments here; above there were 6!=720 possible assignments. Because $k=v$ this is called a complete block design.
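The counting is easy to verify; here is a throwaway Python sketch (the numbers, not the code, are what matter):

from itertools import permutations

# Completely randomized design on 6 homogeneous pots: any of the
# 6! orderings of the pots over the six treatment slots is possible.
n_crd = sum(1 for _ in permutations(range(6)))       # 6! = 720

# Complete block design: randomize the 3 cocktails separately within
# the block of large plants and the block of small plants.
n_cbd = sum(1 for _ in permutations(range(3))) ** 2  # 3! * 3! = 36

print(n_crd, n_cbd)   # 720 36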

The statistical use of the term ``block design'' should now be clear: a block design is a plan for an experiment in which the experimental units have been partitioned into homogeneous sets, telling us which treatment each experimental unit receives. The external representation is a bit less specific: each block of a block design in external representation format specifies a set of treatments to use on a homogeneous set (block) of experimental units, but not the exact treatment-to-unit map within the block. The latter is usually left to random assignment; moreover, it does not affect the standard measures of ``goodness'' of a design (it does not affect the information matrix; see below), so it will not be mentioned again.

There are solid mathematical justifications for why the complete block design in the example above is deemed ``good,'' which we develop next. This development does not require that $k=v$, nor that the block sizes are all the same, nor that each treatment is assigned to the same number of units. However, it does assume that the block sizes are known, fixed constants, as determined by the collection (of fixed size) of experimental units at hand. Given the division of units into blocks, we seek an assignment of treatments to units, i.e. a block design, that optimizes the precision of our estimates for treatment effects. From this perspective, two different designs are comparable if and only if they have the same $v$, $b$, and block sizes (more precisely, block size distribution).

Statistical estimation takes place in the context of a model for the observations $y$. Let $y_{ij}$ denote the observation on unit $i$ in block $j$. Of course we must decide which treatment is to be placed on that unit; this is the design decision. Denote the assigned treatment by $d[i,j]$. Then the standard statistical model for the block design (there are many variations, but this fundamental, widely applicable block design model is the only one considered here) is


\begin{displaymath}
y_{ij} = \mu + \tau_{d[i,j]} + \beta_j + e_{ij}
\end{displaymath}

where $\tau_{d[i,j]}$ is the effect of the assigned treatment, $\beta_j$ is the effect of the block (reflecting how this homogeneous set of units differs from other sets), $\mu$ is an average response (the treatment and block effects may be thought of as deviations from this average), and $e_{ij}$ is a random error term reflecting variability among homogeneous units, measurement error, and whatever other forces play a role in making no experimental run perfectly repeatable. In this model the $e_{ij}$'s have independent probability distributions with common mean 0 and common (unknown) variance $\sigma^{2}$.
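To make the model concrete, here is a minimal simulation sketch in Python for the greenhouse example; all numerical effect values are invented for illustration only.

import numpy as np

# y_ij = mu + tau_d[i,j] + beta_j + e_ij  for v = 3 treatments in
# b = 2 blocks of size k = 3 (the complete block design above).
rng = np.random.default_rng(0)

mu = 10.0                          # average response (invented)
tau = np.array([0.0, 1.5, -0.5])   # treatment effects (invented)
beta = np.array([2.0, -2.0])       # block effects: large vs. small plants
sigma = 1.0                        # error standard deviation (invented)

# d[i, j] = treatment assigned to unit i of block j; each treatment
# appears once per block (randomization within blocks omitted here).
d = np.array([[0, 1, 2],
              [0, 1, 2]]).T        # shape (k, b)

e = rng.normal(0.0, sigma, size=d.shape)    # independent errors
y = mu + tau[d] + beta[np.newaxis, :] + e   # one observation per unit
print(y)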

With $n$ the total number of experimental units in a block design, the design map $d$ (note: the symbol $d$ is used both for the map and for the block design itself) from plots to treatments can be represented as an $n \times v$ plot/treatment incidence matrix, denoted $A_d$. Also let $N_d$ be the $v \times b$ treatment/block incidence matrix, let $K$ be the diagonal matrix of block sizes ($=kI$ for equisized blocks), and write


\begin{displaymath}
C_d = A'_d A_d - N_d K^{-1} N'_d
\end{displaymath}

which is called the information matrix for design $d$ (note: $A'$ denotes the transpose of a matrix $A$). Why this name? Estimation focuses on comparing the treatment effects: every treatment contrast $\sum c_i \tau_i$ with $\sum c_i =0$ is of possible interest. All contrasts are estimable (can be linearly and unbiasedly estimated) if and only if the block design is connected. For disconnected designs, the contrasts within the connected treatment subsets span the space of all estimable contrasts. For a given design $d$, we employ the best (minimum variance) linear unbiased estimators for contrasts. The variances of these estimators, and their covariances, though best for the given $d$, are a function of $d$. In fact, if $c$ is the vector of contrast coefficients $c_i$, then the variance of the best estimator of the contrast $c' \tau=\sum c_i\tau_i$ is


\begin{displaymath}
\sigma^2 c' C^{+}_d c
\end{displaymath}

where $C^{+}_d$ is the Moore-Penrose inverse of $C_d$ (if $C_d=\sum x_{di}E_{di}$ is the spectral decomposition of $C_d$, then $C_d^+=\sum_{x_{di}\neq 0} \frac{1}{x_{di}}E_{di}$). The information carried by $C_d$ is the precision of our estimators: large information $C_d$ corresponds to small variances as determined by $C^{+}_d$.
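Putting the pieces together, the following Python sketch (with the unit and treatment orderings chosen for convenience) builds $A_d$, $N_d$ and $K$ for the complete block design of the greenhouse example, forms $C_d$, and evaluates the variance multiplier $c' C^{+}_d c$ for a simple contrast:

import numpy as np

v, b, k = 3, 2, 3                  # treatments, blocks, block size
n = b * k                          # total experimental units

# A_d: n x v plot/treatment incidence matrix.  Units are listed block
# by block; unit i of block j receives treatment i.
A = np.zeros((n, v))
for j in range(b):
    for i in range(k):
        A[j * k + i, i] = 1.0

N = np.ones((v, b))                # treatment/block incidence: once each
K = k * np.eye(b)                  # diagonal matrix of block sizes

C = A.T @ A - N @ np.linalg.inv(K) @ N.T   # information matrix C_d

c = np.array([1.0, -1.0, 0.0])     # contrast between treatments 1 and 2
Cplus = np.linalg.pinv(C)          # Moore-Penrose inverse of C_d
print(c @ Cplus @ c)               # prints 1.0 (up to rounding)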

We wish to make variances small through choice of $d$. That is, we choose $d$ so that $C^{+}_d$ is (in some sense) small. Design optimality criteria are real-valued functions of $C^{+}_d$ that it is desirable to minimize. Obviously a design criterion may also be thought of as a function of $d$ itself, which we do when convenient.
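For example (an illustrative sketch, not part of the external representation), the standard A- and D-criteria can be expressed through the nonzero eigenvalues $x_{di}$ of $C_d$ from the spectral decomposition above; minimizing them makes $C^{+}_d$ small in an average-variance and a generalized-variance sense respectively. The function names here are our own.

import numpy as np

def a_value(C, tol=1e-9):
    # Sum of reciprocals of the nonzero eigenvalues of C_d, i.e. the
    # trace of the Moore-Penrose inverse: proportional to an average
    # variance of contrast estimators.  Smaller is better.
    x = np.linalg.eigvalsh(C)
    return np.sum(1.0 / x[x > tol])

def d_value(C, tol=1e-9):
    # Product of reciprocals of the nonzero eigenvalues: a generalized
    # variance.  Again, smaller is better.
    x = np.linalg.eigvalsh(C)
    return np.prod(1.0 / x[x > tol])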

With this background, let us turn now to what has been implemented for the external representation of statistical_properties:

statistical_properties = element statistical_properties {
    attribute precision { xsd:positiveInteger } ,
    canonical_variances ? ,
    pairwise_variances ? ,
    optimality_criteria ? ,
    other_ordering_criteria ? ,
    canonical_efficiency_factors ? ,
    functions_of_efficiency_factors ? ,
    robustness_properties ?
}

The elements of statistical_properties are quantities which can be calculated starting from the information matrix $C_d$.
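For instance, since all child elements are optional, a minimal document fragment conforming to this schema carries only the required precision attribute (the value 9 below is an arbitrary example):

<statistical_properties precision="9"/>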


