Statistics

Parametric methods

One-sample t-test

  • Tests the difference between a sample mean and a (known) population mean.
  • Also covers two paired samples: test the pairwise differences against a population mean of 0 (paired t-test).
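
A minimal sketch of both cases with SciPy (the numbers are made up for illustration):

    import numpy as np
    from scipy import stats

    # One-sample t-test: does the sample mean differ from a known population mean?
    sample = np.array([5.1, 4.9, 5.3, 5.6, 4.8, 5.2])
    t_stat, p_value = stats.ttest_1samp(sample, popmean=5.0)
    print(f"one-sample: t={t_stat:.3f}, p={p_value:.3f}")

    # Paired t-test: a one-sample test on the pairwise differences against 0.
    before = np.array([12.0, 11.5, 13.2, 12.8, 11.9])
    after = np.array([12.6, 11.9, 13.1, 13.5, 12.4])
    t_stat, p_value = stats.ttest_rel(before, after)
    print(f"paired:     t={t_stat:.3f}, p={p_value:.3f}")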

Two-sample t-test

  • Compares the means of two independent samples.
  • I.e. tests whether the two samples came from populations with the same mean.
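
A sketch with SciPy on two made-up groups; equal_var=False selects Welch's variant, which does not assume equal variances:

    import numpy as np
    from scipy import stats

    group_a = np.array([23.1, 25.4, 22.8, 24.9, 26.0, 23.7])
    group_b = np.array([27.2, 26.5, 28.1, 25.9, 27.8, 26.3])

    # Two-sample (independent) t-test.
    t_stat, p_value = stats.ttest_ind(group_a, group_b, equal_var=False)
    print(f"t={t_stat:.3f}, p={p_value:.4f}")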

Non-parametric methods

Fisher's exact test

Calculates the exact probability, under the null hypothesis that the two groups do not differ, of observing a contingency table at least as extreme as the one observed (typically a 2×2 table).
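
A sketch with SciPy on a hypothetical 2×2 table (rows = group, columns = outcome):

    from scipy import stats

    table = [[8, 2],
             [1, 9]]

    odds_ratio, p_value = stats.fisher_exact(table, alternative="two-sided")
    print(f"odds ratio={odds_ratio:.2f}, p={p_value:.4f}")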

Sign test

  • Tests whether the median of paired differences is zero, using only the signs of the differences (not their magnitudes).
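
SciPy has no dedicated sign-test function, but the test reduces to a binomial test on the number of positive differences; a sketch assuming SciPy ≥ 1.7 for stats.binomtest (made-up data):

    import numpy as np
    from scipy import stats

    before = np.array([12.0, 11.5, 13.2, 12.8, 11.9, 12.4, 13.0])
    after = np.array([12.6, 11.9, 13.1, 13.5, 12.4, 12.9, 13.4])

    diffs = after - before
    diffs = diffs[diffs != 0]          # ties are discarded
    n_positive = int(np.sum(diffs > 0))

    # Under H0 (median difference = 0), positive signs ~ Binomial(n, 0.5).
    result = stats.binomtest(n_positive, n=len(diffs), p=0.5)
    print(f"p={result.pvalue:.4f}")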

Wilcoxon signed rank test

  • Non-parametric equivalent of the one-sample (or paired) t-test.
  • Has more power than the sign test.
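
A sketch with scipy.stats.wilcoxon on made-up paired data:

    import numpy as np
    from scipy import stats

    before = np.array([12.0, 11.5, 13.2, 12.8, 11.9, 12.4, 13.0])
    after = np.array([12.6, 11.9, 13.1, 13.5, 12.4, 12.9, 13.4])

    # Wilcoxon signed-rank test on the paired differences.
    w_stat, p_value = stats.wilcoxon(before, after)
    print(f"W={w_stat:.1f}, p={p_value:.4f}")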

Wilcoxon rank sum test

  • Non-parametric equivalent of the two-sample t-test (equivalent to the Mann–Whitney U test).
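
A sketch using scipy.stats.mannwhitneyu, which implements the equivalent Mann–Whitney U test (made-up data):

    import numpy as np
    from scipy import stats

    group_a = np.array([23.1, 25.4, 22.8, 24.9, 26.0, 23.7])
    group_b = np.array([27.2, 26.5, 28.1, 25.9, 27.8, 26.3])

    u_stat, p_value = stats.mannwhitneyu(group_a, group_b, alternative="two-sided")
    print(f"U={u_stat:.1f}, p={p_value:.4f}")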

Kruskal-Wallis test

  • Non-parametric equivalent of one-way ANOVA (compares two or more independent samples).
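
A sketch with scipy.stats.kruskal on three made-up groups:

    import numpy as np
    from scipy import stats

    group_a = np.array([23.1, 25.4, 22.8, 24.9, 26.0])
    group_b = np.array([27.2, 26.5, 28.1, 25.9, 27.8])
    group_c = np.array([24.0, 23.3, 25.1, 24.6, 23.8])

    # Kruskal-Wallis H-test across the independent groups.
    h_stat, p_value = stats.kruskal(group_a, group_b, group_c)
    print(f"H={h_stat:.2f}, p={p_value:.4f}")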

Correlations

Pearson correlation (R)

  • Measures linear correlation between two variables.
  • In simple linear regression, it is the (signed) square root of the R² goodness-of-fit measure.
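
A sketch with SciPy illustrating the relationship to R² via a simple linear fit (made-up data):

    import numpy as np
    from scipy import stats

    x = np.array([1.0, 2.0, 3.0, 4.0, 5.0, 6.0])
    y = np.array([2.1, 3.9, 6.2, 8.1, 9.8, 12.3])

    r, p_value = stats.pearsonr(x, y)
    print(f"r={r:.3f}, p={p_value:.4f}, r^2={r**2:.3f}")

    # r^2 matches the R^2 of a simple linear regression of y on x.
    slope, intercept, r_from_fit, p_fit, stderr = stats.linregress(x, y)
    print(f"R^2 from linregress: {r_from_fit**2:.3f}")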

Spearman's correlation (ρ)

  • Pearson's R computed on the ranks of the values, so it also applies to ordinal data.
  • Measures how well the relationship between the two variables can be described by a monotonic function.
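
A sketch with SciPy showing that Spearman's ρ equals Pearson's correlation applied to the ranks (made-up data):

    import numpy as np
    from scipy import stats

    x = np.array([1.0, 2.0, 3.0, 4.0, 5.0, 6.0])
    y = np.array([1.2, 3.5, 3.0, 8.0, 15.1, 40.2])   # roughly monotonic, non-linear

    rho, p_value = stats.spearmanr(x, y)
    print(f"rho={rho:.3f}, p={p_value:.4f}")

    # Same value as Pearson's correlation on the ranked data.
    r_on_ranks, _ = stats.pearsonr(stats.rankdata(x), stats.rankdata(y))
    print(f"Pearson on ranks: {r_on_ranks:.3f}")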

Kendall rank correlation (τ)

  • Correlation between two ordinal (or ranked) variables.
  • Based on the difference between the number of concordant and discordant pairs.
  • Generally preferred over Spearman's ρ: it is more robust and directly interpretable as the probability of concordance minus the probability of discordance.
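
A sketch with scipy.stats.kendalltau (made-up data):

    import numpy as np
    from scipy import stats

    x = np.array([1, 2, 3, 4, 5, 6])
    y = np.array([2, 1, 4, 3, 6, 5])

    # Kendall's tau: counts concordant vs. discordant pairs.
    tau, p_value = stats.kendalltau(x, y)
    print(f"tau={tau:.3f}, p={p_value:.4f}")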

Other tests

Dunnett test

Compares the mean of each of several treatment groups against a single control group, while controlling the family-wise error rate across the comparisons.
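
A sketch assuming SciPy ≥ 1.11, which provides scipy.stats.dunnett (made-up data):

    import numpy as np
    from scipy import stats  # scipy.stats.dunnett requires SciPy >= 1.11

    control = np.array([10.1, 9.8, 10.4, 10.0, 9.9])
    treat_1 = np.array([10.9, 11.2, 10.8, 11.5, 11.0])
    treat_2 = np.array([10.2, 10.0, 10.5, 10.3, 9.9])

    # Each treatment is compared against the control; p-values are adjusted
    # for the multiple comparisons.
    result = stats.dunnett(treat_1, treat_2, control=control)
    print(result.statistic, result.pvalue)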

Bayesian methods

Distributions

  • Kolmogorov–Smirnov test — Nonparametric test of the equality of continuous, one-dimensional probability distributions that can be used to compare a sample with a reference probability distribution (one-sample K–S test), or to compare two samples (two-sample K–S test).
  • Kullback–Leibler divergence — Measure of how one probability distribution differs from a second, reference distribution; it is not symmetric.
  • Jensen–Shannon divergence — Symmetric version of the Kullback–Leibler divergence.
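
A sketch of all three with SciPy; note that scipy.spatial.distance.jensenshannon returns the JS distance, i.e. the square root of the JS divergence (samples and distributions below are made up):

    import numpy as np
    from scipy import stats
    from scipy.spatial.distance import jensenshannon

    rng = np.random.default_rng(0)

    # Two-sample K-S test: were the two samples drawn from the same distribution?
    sample_a = rng.normal(loc=0.0, scale=1.0, size=200)
    sample_b = rng.normal(loc=0.5, scale=1.0, size=200)
    ks_stat, p_value = stats.ks_2samp(sample_a, sample_b)
    print(f"KS: D={ks_stat:.3f}, p={p_value:.4f}")

    # KL and JS divergences between two discrete distributions (in bits).
    p = np.array([0.1, 0.4, 0.5])
    q = np.array([0.2, 0.3, 0.5])
    kl = stats.entropy(p, q, base=2)             # KL(p || q), not symmetric
    js_div = jensenshannon(p, q, base=2) ** 2    # squared distance = JS divergence
    print(f"KL={kl:.4f}, JS divergence={js_div:.4f}")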

Information retrieval

Clustering

Enrichment

Gene set enrichment analysis

Gibbs Free Energy