
# Nonparametric Analysis

Many statistical methods assume that the data follow a particular probability distribution (often the normal distribution) whose parameters (e.g., mean and standard deviation) are known. Hence, these are known as parametric methods.

Many parametric tests are robust to violations of their distributional assumptions, particularly when sample sizes are sufficiently large. Under other circumstances, the nonparametric class of methods is more powerful. Nonparametric methods also accommodate data types (e.g., nominal and ordinal data) that carry less information than the types required for parametric tests. The tradeoffs are discussed more fully in the numbered points below.

The following sections describe some of the more common non-parametric techniques.

Wilcoxon Signed Rank Test for Single Group Median
Tests whether the median of a variable has a particular value. Unlike the one-sample t-test, it does not assume that the observations come from a Gaussian distribution.
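The core of the test is the signed-rank statistic W+. A minimal pure-Python sketch (the function name and data are illustrative, not any library's actual API) shows how it is computed: subtract the hypothesized median, drop zero differences, rank the absolute differences with midranks for ties, and sum the ranks of the positive differences.

```python
def signed_rank_statistic(data, median0):
    """Wilcoxon signed-rank statistic W+ for H0: median = median0.

    Zero differences are dropped; tied absolute differences receive
    midranks, as is conventional.
    """
    diffs = [x - median0 for x in data if x != median0]
    abs_sorted = sorted(abs(d) for d in diffs)

    def midrank(v):
        # Average of the 1-based positions the value occupies.
        first = abs_sorted.index(v) + 1
        count = abs_sorted.count(v)
        return first + (count - 1) / 2

    # W+ sums the ranks of the positive differences; under H0 its mean
    # is n(n + 1)/4, and an exact table or normal approximation converts
    # it to a p-value.
    return sum(midrank(abs(d)) for d in diffs if d > 0)

data = [45, 50, 52, 48, 55, 60, 41, 47]
w_plus = signed_rank_statistic(data, 50)  # the zero difference is dropped
```

Note that W+ and W- always sum to n(n + 1)/2, so either one determines the other.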

Mann-Whitney U-test for Independent-Samples
Performs a two-sample rank test (also called the Mann-Whitney U-test for Independent-Samples, or the two-sample Wilcoxon rank sum test) of the equality of two population medians, and calculates the corresponding point estimate and confidence interval.
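The U statistic has a simple counting interpretation: over all pairs, how often does an observation from the first sample exceed one from the second? A pure-Python sketch (function name illustrative):

```python
def mann_whitney_u(xs, ys):
    """Mann-Whitney U statistic for the first sample.

    Counts, over all len(xs) * len(ys) pairs, how often an x exceeds
    a y; ties count one half. Under H0 of equal distributions, U has
    mean len(xs) * len(ys) / 2.
    """
    u = 0.0
    for x in xs:
        for y in ys:
            if x > y:
                u += 1.0
            elif x == y:
                u += 0.5
    return u

u = mann_whitney_u([1, 4, 5], [2, 3, 6])
```

The two U statistics (one per sample) always sum to the number of pairs, so computing one gives the other for free.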

Wilcoxon Signed Rank Test for Paired-Samples
Tests whether two paired sets of observations come from the same distribution. The alternative hypothesis is that the observations come from distributions with identical shape but different locations. Unlike the paired t-test, this test does not assume that the observations come from normal distributions.

Kruskal-Wallis Rank Sum Test for Independent-Samples
Tests the null hypothesis that multiple population distribution functions are identical against the alternative hypothesis that they differ by location.
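The Kruskal-Wallis H statistic generalizes the rank sum idea to several groups: pool and rank all observations, then measure how far each group's rank sum deviates from what equal distributions would predict. A pure-Python sketch (function name illustrative):

```python
def kruskal_wallis_h(groups):
    """Kruskal-Wallis H statistic for k independent samples.

    Pools all observations, ranks them (midranks for ties), and compares
    each group's rank sum with its expectation under H0. H is
    approximately chi-square with k - 1 degrees of freedom under H0.
    """
    pooled = sorted(v for g in groups for v in g)
    n = len(pooled)

    def midrank(v):
        first = pooled.index(v) + 1
        count = pooled.count(v)
        return first + (count - 1) / 2

    rank_sums = [sum(midrank(v) for v in g) for g in groups]
    return (12.0 / (n * (n + 1))) * sum(
        r * r / len(g) for r, g in zip(rank_sums, groups)
    ) - 3 * (n + 1)

h = kruskal_wallis_h([[1, 2, 3], [4, 5, 6], [7, 8, 9]])
```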

One-sample Kolmogorov-Smirnov Test
Compares the distribution of a given sample to the hypothesized distribution defined by a specified cumulative distribution function.
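The test statistic D is the largest vertical gap between the sample's empirical CDF and the hypothesized CDF. Because the empirical CDF is a step function, the gap must be checked just before and just after each jump. A pure-Python sketch (function name illustrative):

```python
def ks_one_sample(sample, cdf):
    """One-sample Kolmogorov-Smirnov statistic D.

    D = sup over x of |empirical CDF(x) - cdf(x)|, evaluated on both
    sides of each step of the empirical CDF.
    """
    xs = sorted(sample)
    n = len(xs)
    d = 0.0
    for i, x in enumerate(xs, start=1):
        f = cdf(x)
        d = max(d, abs(i / n - f), abs((i - 1) / n - f))
    return d

# Example: four values against the Uniform(0, 1) CDF, F(x) = x.
d = ks_one_sample([0.1, 0.2, 0.5, 0.9], lambda x: x)
```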

Two-sample Kolmogorov-Smirnov Test
Performs a two-sample Kolmogorov-Smirnov test to compare the distributions of values in two data sets. For each potential value x, the Kolmogorov-Smirnov test compares the proportion of values in the first sample less than x with the proportion of values in the second sample less than x.
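The comparison described above can be sketched directly in pure Python (function name illustrative): at every pooled data value, compute the proportion of each sample at or below it, and take the largest discrepancy.

```python
def ks_two_sample(xs, ys):
    """Two-sample Kolmogorov-Smirnov statistic D.

    D is the largest gap between the two empirical CDFs, which can only
    change at observed data values, so checking those suffices.
    """
    n_x, n_y = len(xs), len(ys)
    d = 0.0
    for v in sorted(set(xs) | set(ys)):
        f_x = sum(1 for x in xs if x <= v) / n_x
        f_y = sum(1 for y in ys if y <= v) / n_y
        d = max(d, abs(f_x - f_y))
    return d

d = ks_two_sample([1, 3, 5], [2, 4, 6])
```

When the two samples do not overlap at all, D reaches its maximum of 1.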

Paired-Samples Sign Test
Tests the null hypothesis that a random value from the population of paired differences is as likely to fall above the specified value as below it.

1. Nonparametric tests make less stringent demands on the data. For standard parametric procedures to be valid, certain underlying conditions or assumptions must be met, particularly for smaller sample sizes. The one-sample t-test, for example, requires that the observations be drawn from a normally distributed population. For two independent samples, the t-test has the additional requirement that the population standard deviations be equal. If these assumptions are violated, the resulting P-values and confidence intervals may not be trustworthy.

However, normality is not required for the Wilcoxon signed rank or rank sum tests to produce valid inferences about whether the median of a symmetric population is 0 or whether two samples are drawn from the same population.

2. Nonparametric procedures can sometimes be used to get a quick answer with little calculation. One of the simplest nonparametric procedures is the sign test.

The sign test can be used with paired data to test the hypothesis that differences are equally likely to be positive or negative. For small samples, an exact test of whether the proportion of positives is 0.5 can be obtained by using a binomial distribution. For large samples, the test statistic is (plus - minus)² / (plus + minus), where plus is the number of positive values and minus is the number of negative values. Under the null hypothesis that the positive and negative values are equally likely, the test statistic follows the chi-square distribution with 1 degree of freedom. Whether the sample size is small or large, the sign test provides a quick test of whether two paired treatments are equally effective simply by counting the number of times each treatment is better than the other.
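Both versions of the test described above fit in a few lines of pure Python (function name illustrative); the exact small-sample tail uses the Binomial(n, 0.5) distribution, and the large-sample statistic is the chi-square form given in the text.

```python
from math import comb

def sign_test(plus, minus):
    """Sign test for paired data: are + and - differences equally likely?

    Returns the exact two-sided binomial p-value and the large-sample
    chi-square statistic (plus - minus)**2 / (plus + minus), which has
    1 degree of freedom under H0.
    """
    n = plus + minus  # zero differences are excluded beforehand
    # Exact tail: P(X >= max(plus, minus)) for X ~ Binomial(n, 0.5),
    # doubled for a two-sided test (and capped at 1).
    k = max(plus, minus)
    tail = sum(comb(n, i) for i in range(k, n + 1)) / 2 ** n
    p_exact = min(1.0, 2 * tail)
    chi_square = (plus - minus) ** 2 / n
    return p_exact, chi_square

# Treatment A better 8 times, treatment B better 2 times:
p, chi2 = sign_test(plus=8, minus=2)
```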

3. Nonparametric methods provide an air of objectivity when there is no reliable (universally recognized) underlying scale for the original data and there is some concern that the results of standard parametric techniques would be criticized for their dependence on an artificial metric.

For example, patients might be asked whether they feel extremely uncomfortable / uncomfortable / neutral / comfortable / very comfortable. What scores should be assigned to the comfort categories and how do we know whether the outcome would change dramatically with a slight change in scoring? Some of these concerns are blunted when the data are converted to ranks.

4. A historical appeal of rank tests is that it was easy to construct tables of exact critical values, provided there were no ties in the data. The same critical value could be used for all data sets with the same number of observations, because every data set is reduced to the ranks 1, ..., n. However, this advantage has been eliminated by the ready availability of personal computers.

5. Sometimes the data do not constitute a random sample from a larger population; the data in hand are all there are. Standard parametric techniques based on sampling from larger populations are then no longer appropriate: because there is no larger population, there are no population parameters to estimate. Nevertheless, certain kinds of nonparametric procedures can be applied to such data by using randomization models.

Such a strong case has been made for the benefits of nonparametric procedures that some might ask why parametric procedures aren't abandoned entirely in favor of nonparametric methods!

The major disadvantage of nonparametric techniques is contained in the name: because the procedures are nonparametric, there are no parameters to describe, and it becomes more difficult to make quantitative statements about the actual difference between populations. For example, when the sign test says two treatments differ, it provides no confidence interval and does not say by how much they differ.

The second disadvantage is that nonparametric procedures throw away information! The sign test, for example, uses only the signs of the observations. Ranks preserve information about the order of the data but discard the actual values. Because information is discarded, nonparametric procedures can never be as powerful (able to detect existing differences) as their parametric counterparts when parametric tests can be used.

How much information is lost? One answer is given by the asymptotic relative efficiency (ARE), which, loosely speaking, is the ratio of sample sizes (parametric to nonparametric) required for the parametric procedure to have the same ability to reject a null hypothesis as the corresponding nonparametric procedure. When the underlying distributions are normal (with equal population standard deviations in the two-sample case), the AREs are:

| Procedure | ARE |
| --- | --- |
| Sign test | 2/π ≈ 0.637 |
| Wilcoxon signed-rank test | 3/π ≈ 0.955 |
| Wilcoxon-Mann-Whitney U test | 3/π ≈ 0.955 |
| Spearman correlation coefficient | 0.91 |

Thus, if the data come from a normally distributed population, the usual z statistic requires only 637 observations to demonstrate a difference for which the sign test requires 1000. Similarly, the t-test requires only 955 observations compared with the Wilcoxon signed-rank test's 1000. It has been shown that the ARE of the Wilcoxon-Mann-Whitney test is always at least 0.864, regardless of the underlying population. Many say the AREs are so close to 1 for procedures based on ranks that they are the best reason yet for using nonparametric techniques!
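The sample-size figures above follow directly from the AREs; a small sketch of the arithmetic (function name illustrative):

```python
from math import pi

def parametric_equivalent_n(are, n_nonparametric):
    """Sample size a normal-theory test needs to match a nonparametric
    test on n_nonparametric observations, approximately ARE * n,
    when the data really are normal."""
    return round(are * n_nonparametric)

sign_vs_z = parametric_equivalent_n(2 / pi, 1000)      # ~637
wilcoxon_vs_t = parametric_equivalent_n(3 / pi, 1000)  # ~955
```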