Guidelines for Statistical Analysis and Data Presentation
Basic philosophy -- These rules and suggestions proceed from two principles. (1) Authors are free to perform and interpret statistical analyses as they see fit. (2) The reader needs to be provided with information sufficient for an independent assessment of the appropriateness of the method. Thus, the assumptions and (or) the model underlying unusual statistical analyses must be clearly stated, and results must be sufficiently detailed. On occasion, more detail than is warranted for the final publication may have to be provided to reviewers to allow them to make an informed judgment. The purpose of statistical analysis is to increase the conciseness, clarity, and objectivity with which results are presented and interpreted; an analysis that does not serve those ends is probably inappropriate.
Data description -- Sampling designs, experimental designs, data-collection protocols, precision of measurements, sampling units, experimental units, and sample sizes must be clearly described. Reported information usually includes the sample size and some measure of the precision (standard errors or specified confidence intervals) of estimates, although this may not be necessary or possible in all instances, especially for unusual statistics. Graphical data presentation is encouraged. Carefully composed graphs often permit the reader to decide at a glance whether data are in danger of violating statistical assumptions.
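As a sketch of the kind of summary this paragraph calls for -- sample size, estimate, and a measure of precision -- the following computes a mean, its standard error, and a 95% confidence interval from a hypothetical sample (the values and the tabled t critical value are illustrative assumptions, not from this document):

```python
import math
import statistics

# Hypothetical sample of 10 measurements (illustrative values only)
data = [4.1, 3.9, 4.4, 4.0, 4.3, 3.8, 4.2, 4.1, 4.0, 4.2]

n = len(data)
mean = statistics.mean(data)
sd = statistics.stdev(data)      # sample standard deviation (n - 1 denominator)
se = sd / math.sqrt(n)           # standard error of the mean

# 95% confidence interval using t_{0.975, df=9} = 2.262 (from standard tables)
t_crit = 2.262
ci = (mean - t_crit * se, mean + t_crit * se)

print(f"n = {n}, mean = {mean:.2f}, SE = {se:.3f}")
print(f"95% CI: ({ci[0]:.2f}, {ci[1]:.2f})")
```

Reporting all three quantities (n, estimate, precision) lets a reader reconstruct other summaries, e.g. a confidence interval at a different level.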
Assumptions -- The author must be satisfied that the assumptions behind any statistical analysis are sufficiently met and, at least where unusual assumptions are made, unusual procedures are used, or unusual types of data are involved, must provide the reader with sufficient information to judge whether any departures from assumptions are severe enough to vitiate the conclusions. The amount of detail provided in any particular instance will depend on how central the statistical test is to the conclusions.
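As one illustration of checking an assumption (with hypothetical data), a quick screen of the equal-variance assumption behind a standard two-sample comparison can be sketched by comparing the two sample variances; the rule-of-thumb threshold here is a common convention, not a prescription from this document:

```python
import statistics

# Hypothetical samples from two groups (illustrative values only)
group1 = [4.1, 3.9, 4.4, 4.0, 4.3]
group2 = [5.0, 4.2, 5.6, 4.4, 5.3]

var1 = statistics.variance(group1)   # sample variance, n - 1 denominator
var2 = statistics.variance(group2)
ratio = max(var1, var2) / min(var1, var2)

# A large ratio (a common rule of thumb flags ratios above ~4) suggests
# reporting the departure and considering, e.g., a Welch-type correction.
print(f"variance ratio = {ratio:.1f}")
```

A graphical check (e.g. plotting the raw data by group) often conveys the same information more directly, as the preceding section notes.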
Reporting of analyses -- The specific statistical procedure must always be stated. If a statistics program or program package was used, a complete citation (including version number) should be given. If necessary, the author should indicate which procedure within a package was used and which method within a procedure was chosen. Such citations may be even more important for reviewers than they are for readers. Unusual statistical procedures need to be explained in sufficient detail, including references if appropriate, for the reader to reconstruct the analysis. To denote levels of significance, actual P values are generally more informative than symbols such as * and **.
If conclusions are based on an analysis of variance or regression, information sufficient to permit the construction of the full analysis of variance table (at least degrees of freedom, the structure of F-ratios, and P values) must be presented or be clearly implicit. Where ambiguity is possible, the authors must indicate which effects were considered fixed or random and why.
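The elements listed above (degrees of freedom, the structure of the F-ratio) can be made concrete with a one-way ANOVA table built from hypothetical data; the group values below are illustrative assumptions only:

```python
# Hypothetical one-way layout: three groups, three observations each
groups = {
    "A": [5, 6, 7],
    "B": [8, 9, 10],
    "C": [11, 12, 13],
}

all_values = [x for g in groups.values() for x in g]
grand_mean = sum(all_values) / len(all_values)

# Sums of squares partitioned between and within groups
ss_between = sum(len(g) * (sum(g) / len(g) - grand_mean) ** 2
                 for g in groups.values())
ss_within = sum(sum((x - sum(g) / len(g)) ** 2 for x in g)
                for g in groups.values())

df_between = len(groups) - 1
df_within = len(all_values) - len(groups)

ms_between = ss_between / df_between
ms_within = ss_within / df_within
f_ratio = ms_between / ms_within     # F = MS_between / MS_within

print(f"{'Source':<10}{'df':>4}{'SS':>8}{'MS':>8}{'F':>8}")
print(f"{'Between':<10}{df_between:>4}{ss_between:>8.1f}{ms_between:>8.1f}{f_ratio:>8.1f}")
print(f"{'Within':<10}{df_within:>4}{ss_within:>8.1f}{ms_within:>8.1f}")
# The P value is the upper tail of F(df_between, df_within),
# e.g. scipy.stats.f.sf(f_ratio, df_between, df_within) if SciPy is available.
```

Reporting the table in this form (source, df, SS, MS, F, P) lets a reader verify the structure of the F-ratio directly, which matters when effects may be fixed or random.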
Effect size and biological importance must not be confused with statistical significance. Power analyses (determination of type II error rates, β) occasionally can be very useful, especially if used in conjunction with descriptive procedures such as confidence intervals. Power analyses are not always routine; for complex or unusual statistical designs, they should be described in enough detail for the reader to reconstruct them.
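As a sketch of such a calculation, the approximate power of a two-sample comparison can be computed under a normal approximation; the effect size, standard deviation, and sample size below are hypothetical assumptions for illustration:

```python
import math

def norm_cdf(z):
    # Standard normal CDF via the error function
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

def approx_power(delta, sigma, n, z_alpha=1.96):
    """Approximate power (1 - beta) of a two-sample z-test under a normal
    approximation, with n observations per group and two-sided alpha = 0.05."""
    se = sigma * math.sqrt(2.0 / n)          # SE of the difference in means
    return norm_cdf(abs(delta) / se - z_alpha)

# Hypothetical design: detect a difference of 0.5 SD with 50 per group
p = approx_power(delta=0.5, sigma=1.0, n=50)
print(f"approximate power = {p:.2f}")
```

This is the descriptive counterpart of reporting β: a low power (high β) warns the reader that a non-significant result may simply reflect an insufficient sample size.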