In statistics, a likelihood ratio test is a statistical test used to compare the goodness of fit of two models, one of which (the null model) is a special case of the other (the alternative model). The test is based on the likelihood ratio, which expresses how many times more likely the data are under one model than the other. This likelihood ratio, or equivalently its logarithm, can then be used to compute a p-value, or compared to a critical value to decide whether to reject the null model in favour of the alternative model. When the logarithm of the likelihood ratio is used, the statistic is known as a log-likelihood ratio statistic, and the probability distribution of this test statistic, assuming that the null model is true, can be approximated using Wilks's theorem.
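
As a sketch of the standard formulation (the symbols L, \Theta_0 and \Theta are introduced here only for illustration): if L(\theta) denotes the likelihood function, \Theta_0 the parameter space of the null model, and \Theta that of the alternative model, the log-likelihood ratio statistic can be written

\[
\lambda_{\mathrm{LR}} = -2 \,\ln \frac{\sup_{\theta \in \Theta_0} L(\theta)}{\sup_{\theta \in \Theta} L(\theta)},
\]

and Wilks's theorem states that, when the null model is true and standard regularity conditions hold, \lambda_{\mathrm{LR}} is asymptotically chi-squared distributed, with degrees of freedom equal to the difference in the number of free parameters between the two models.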