Build the most powerful hypothesis test: Part 1

I tried my best to avoid such a click-bait-y title for this post, but Mr. Neyman (1894-1981) and Mr. Pearson (1895-1980) didn't leave much room for that.

This is Part 1 of a two-part post. I have built an interactive Shiny app in R for visualizing what goes on under the hood of these hypothesis tests; that can be found in Part 2.

First, let me define the “power” of a hypothesis test. In statistical terms, power is the probability of rejecting the null hypothesis when it is actually false. In layman's terms, suppose we have a testing protocol that decides between two possible outcomes: A and B. In this case, we have four possible scenarios, three of which are:

\mathrm{P_D} = \mathrm{Pr}[ \ decide \ for \ B \ | \ B \ \mathrm{true}]  … (1)

\mathrm{P_{FA}} = \mathrm{Pr}[ \ decide \ for \ B \ | \ A \ \mathrm{true}] … (2)

\mathrm{P_M} = \mathrm{Pr}[ \ decide \ for \ A \ | \ B \ \mathrm{true}] …(3)

If you look closely, the last two scenarios represent errors of two kinds. Historically, these are known as Type-1 and Type-2 errors. However, I find this nomenclature extremely non-intuitive. To make the topic a little more concrete, let us use an example: detecting the presence of a signal in a radar output corrupted by Gaussian noise. Outcome A indicates the absence of the signal, which means the output contains noise only; outcome B indicates the presence of the signal in Gaussian noise. In this setting, a Type-1 error is a false alarm, whereas a Type-2 error is a missed detection. In the above equations, \mathrm{P_D}, \mathrm{P_{FA}} and \mathrm{P_M} denote the probabilities of correct detection, false alarm and missed detection, respectively. There obviously exists a fourth combination, which statisticians have left nameless:

1-\mathrm{P_{FA}} = \mathrm{Pr}[ \ decide \ for \ A \ | \ A \ \mathrm{true}] … (4)

Also, the detection and missed-detection probabilities are connected via

\mathrm{P_M} = 1 - \mathrm{P_D}
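To make these quantities tangible, here is a minimal Monte-Carlo sketch in R (the same language as the Part-2 Shiny app). It assumes a hypothetical scalar detector that decides B whenever the observation exceeds a threshold; the values of mu, sigma and eta are illustrative choices, not anything from the derivation.

```r
# Monte-Carlo sketch: a toy scalar detector "decide B if x > eta".
# mu, sigma and eta are illustrative values, not from the post.
set.seed(1)
n_trials <- 1e5
mu <- 1; sigma <- 1; eta <- 0.5

x_A <- rnorm(n_trials, mean = 0,  sd = sigma)   # noise only (A true)
x_B <- rnorm(n_trials, mean = mu, sd = sigma)   # signal + noise (B true)

P_FA <- mean(x_A > eta)   # decided B although A is true
P_D  <- mean(x_B > eta)   # decided B and B is true
P_M  <- mean(x_B <= eta)  # decided A although B is true

c(P_D = P_D, P_FA = P_FA, P_M = P_M, P_D_plus_P_M = P_D + P_M)  # last entry is exactly 1
```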

We have set our conceptual backdrop. Now we want to make this test the most powerful, meaning we want a decision rule that maximizes \mathrm{P_D}. Naturally, the first question that pops up is: can we design a test that makes the probability of a correct decision arbitrarily high?

We need to answer this question first. Let us say the decision rule divides the entire observation space (over which the likelihood functions are defined) into two disjoint regions \mathbb{R}_A and \mathbb{R}_B. For a signal vector \boldsymbol{x} we can write the following:

\mathrm{P_{FA}} = \int_{\mathbb{R}_B} p(\boldsymbol{x} \ \vert A) \ d\boldsymbol{x} … (5)

\mathrm{P_D} = \int_{\mathbb{R}_B} p(\boldsymbol{x} \ \vert B) \ d\boldsymbol{x} … (6)

Here p(\boldsymbol{x} \ \vert A) and p(\boldsymbol{x} \ \vert B) are the probability density functions of \boldsymbol{x} under each hypothesis. An interesting phenomenon lies in Equations (5) and (6): if \mathbb{R}_B grows until it engulfs the entire observation space, then both \mathrm{P_{FA}} and \mathrm{P_D} go to unity together. This tells us that if the conditional densities p(\boldsymbol{x} \ \vert A) and p(\boldsymbol{x} \ \vert B) overlap (so that any region capturing all of p(\boldsymbol{x} \ \vert B) also picks up probability mass in Equation (5)), then no decision rule can achieve \mathrm{P_D} = 1 and \mathrm{P_{FA}} = 0 at the same time. If you enlarge \mathbb{R}_B to increase \mathrm{P_D}, you will increase \mathrm{P_{FA}} in the process, at a rate that depends on the density functions and the extent of their overlap. This is where the Neyman-Pearson criterion comes to our aid.
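Here is a quick numerical illustration of this coupling, again a sketch assuming scalar Gaussian densities and a one-sided region \mathbb{R}_B = \{x : x > \eta\}: as the threshold eta decreases, \mathbb{R}_B grows and both probabilities climb toward one together.

```r
# Closed-form illustration of the P_D / P_FA coupling, assuming
# scalar Gaussian densities A ~ N(0, 1), B ~ N(1, 1) and
# R_B = {x : x > eta}. All numbers are illustrative.
mu <- 1; sigma <- 1
eta <- seq(4, -4, by = -0.5)                   # shrinking eta enlarges R_B

P_FA <- 1 - pnorm(eta, mean = 0,  sd = sigma)  # Equation (5) for this case
P_D  <- 1 - pnorm(eta, mean = mu, sd = sigma)  # Equation (6) for this case

round(cbind(eta, P_FA, P_D), 3)
# As eta -> -Inf, R_B swallows the whole real line and both columns
# approach 1; no eta gives P_D = 1 together with P_FA = 0.
```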

What is Neyman-Pearson Criterion?

The Neyman-Pearson criterion sees the problem in a rather different light. It asks: how can we maximize \mathrm{P_D} for a given \mathrm{P_{FA}}? In short, it recasts the problem as a constrained optimization:

\max_{\mathbb{R}_B} \ \mathrm{P_D} \quad \mathrm{subject \ to} \quad \mathrm{P_{FA}} \leq \alpha  … (7)
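Problem (7) can be attacked with a Lagrange multiplier, and the argument is short enough to sketch here; it is the heart of the Neyman-Pearson lemma. Form the Lagrangian

F = \mathrm{P_D} + \lambda \, (\alpha - \mathrm{P_{FA}}) = \lambda\alpha + \int_{\mathbb{R}_B} \left[ \ p(\boldsymbol{x} \ \vert B) - \lambda \, p(\boldsymbol{x} \ \vert A) \ \right] d\boldsymbol{x}

To maximize F, a point \boldsymbol{x} should be assigned to \mathbb{R}_B exactly when the integrand is positive, i.e., when the likelihood ratio \Upsilon(\boldsymbol{x}) = p(\boldsymbol{x} \ \vert B) / p(\boldsymbol{x} \ \vert A) exceeds \lambda. The optimal rule is therefore a likelihood ratio test; the threshold (written \gamma from here on) is tuned so that \mathrm{P_{FA}} = \alpha.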

With the multiplier argument above in hand, I will take a rather different path for the rest of this post and build up the result graphically, because that is more insightful. For that, let us consider a simple yet common case: the likelihood function under each hypothesis is Gaussian, and the two differ only in mean while sharing the same covariance matrix.

\mathrm{A} : \ \boldsymbol{x} \sim \ \mathcal{N}(\boldsymbol{0},\sigma^2\mathrm{\boldsymbol{I}})

\mathrm{B} : \ \boldsymbol{x} \sim \ \mathcal{N}(\boldsymbol{\mu},\sigma^2\mathrm{\boldsymbol{I}})

where \boldsymbol{x}\in \mathbb{R}^N, which means \boldsymbol{x} is a vector of length N. Immediately, the likelihood functions can be written as

p(\boldsymbol{x} \ \vert A) = \prod_{n=0}^{N-1}  \frac{1}{\sqrt{2\pi}\sigma} \exp\{-\frac{x_n^2}{2\sigma^2}\}

p(\boldsymbol{x} \ \vert B) = \prod_{n=0}^{N-1} \frac{1}{\sqrt{2\pi}\sigma} \exp\{-\frac{(x_n-\mu_n)^2}{2\sigma^2}\}

Taking the logarithm of the ratio of the above likelihood functions and simplifying (which is really straightforward), we end up with the following expression

\ln \frac{p(\boldsymbol{x} \ \vert B)}{p(\boldsymbol{x} \ \vert A)} = \ln \Upsilon(\boldsymbol{x}) = \frac{1}{\sigma^2}\sum_{n=0}^{N-1} \mu_n x_n - \frac{1}{2\sigma^2} \sum_{n=0}^{N-1} \mu_n^2
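As a sanity check on this simplification, the R snippet below evaluates the log-likelihood ratio both from the closed-form expression and directly from the Gaussian log-densities via dnorm(); the two agree to machine precision. All numbers are illustrative.

```r
# Numerical check of the log-likelihood ratio derived above.
# N, sigma, mu and the sample x are illustrative values.
set.seed(2)
N <- 8; sigma <- 1.5
mu <- rep(0.7, N)                       # mean vector under B
x  <- rnorm(N, mean = mu, sd = sigma)   # one draw, here generated under B

log_LR_direct <- sum(mu * x) / sigma^2 - sum(mu^2) / (2 * sigma^2)
log_LR_dens   <- sum(dnorm(x, mean = mu, sd = sigma, log = TRUE)) -
                 sum(dnorm(x, mean = 0,  sd = sigma, log = TRUE))

all.equal(log_LR_direct, log_LR_dens)   # TRUE: the simplification checks out
```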

Per the Neyman-Pearson argument above, the optimal test compares this likelihood ratio against a threshold \gamma, chosen so that the false-alarm constraint in (7) is met. The log-likelihood ratio test therefore reads

Decide for A if \ln \Upsilon(\boldsymbol{x}) = \frac{1}{\sigma^2}\sum_{n=0}^{N-1} \mu_n x_n - \frac{1}{2\sigma^2} \sum_{n=0}^{N-1} \mu_n^2 < \ln \gamma  … (8)

Decide for B if \ln \Upsilon(\boldsymbol{x}) = \frac{1}{\sigma^2}\sum_{n=0}^{N-1} \mu_n x_n - \frac{1}{2\sigma^2} \sum_{n=0}^{N-1} \mu_n^2 > \ln \gamma  … (9)

In fact, the likelihood ratio test can be simplified further by introducing the notion of a “sufficient statistic”.

An extremely small primer on sufficient statistics:

A sufficient statistic is a particular function, or algebraic manipulation, of our observation \boldsymbol{x}. Instead of dealing with the entire set of observed data points, a sufficient statistic lets us encode the observation in a simpler yet, for the purpose of the test, equally informative manner. This train of jargon becomes clearer in the following example. We can state the test more compactly by simply rearranging (8) and (9):

Decide for A if \sum_{n=0}^{N-1} \mu_n x_n < \sigma^2\ln \gamma + \frac{1}{2}\sum_{n=0}^{N-1}\mu_n^2

Decide for B if \sum_{n=0}^{N-1} \mu_n x_n > \sigma^2\ln \gamma + \frac{1}{2}\sum_{n=0}^{N-1}\mu_n^2

It is clear that the term \sum_{n=0}^{N-1} \mu_n x_n, which is essentially the dot product of the observation vector and the mean vector, alone decides the outcome of the test; everything on the right-hand side is a constant. Hence \sum_{n=0}^{N-1} \mu_n x_n is sufficient to decide the fate of the test. For the case \mu_n = \mu \ \forall n, it becomes \mu\sum_{n=0}^{N-1} x_n, which means the sufficient statistic is simply the sum of all elements of \boldsymbol{x} scaled by \mu. Dividing both sides by \mu (assuming \mu > 0; a negative \mu would flip the inequalities), the exact same test can be rewritten as

Decide for A if \Delta(\boldsymbol{x}) = \sum_{n=0}^{N-1} x_n < \frac{\sigma^2}{\mu}\ln \gamma + \frac{N\mu}{2}

Decide for B if \Delta(\boldsymbol{x}) = \sum_{n=0}^{N-1} x_n > \frac{\sigma^2}{\mu}\ln \gamma + \frac{N\mu}{2}
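For concreteness, here is a direct R transcription of this rule. The threshold gamma is left as a free parameter for now; it gets calibrated in the sketch after the next two equations. The numbers in the example call are illustrative.

```r
# Direct transcription of the sufficient-statistic test (assumes mu > 0).
# gamma is the likelihood-ratio threshold, left free here.
decide <- function(x, mu, sigma, gamma) {
  eta <- (sigma^2 / mu) * log(gamma) + length(x) * mu / 2
  if (sum(x) > eta) "B" else "A"
}

# Example call with illustrative numbers (data drawn under B):
set.seed(3)
decide(rnorm(10, mean = 0.5, sd = 1), mu = 0.5, sigma = 1, gamma = 1)
```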

For this example, the sufficient statistic \Delta(\boldsymbol{x}) is again a Gaussian random variable under each hypothesis; this time it is a scalar rather than a vector.

\mathrm{A}: \Delta(\boldsymbol{x}) \sim \mathcal{N}(0,N\sigma^2)

\mathrm{B}: \Delta(\boldsymbol{x}) \sim \mathcal{N}(N\mu,N\sigma^2)
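Part 2 will study performance properly, but these two distributions already let us calibrate the test numerically: under A, \Delta(\boldsymbol{x}) \sim \mathcal{N}(0, N\sigma^2), so a threshold of \sqrt{N}\sigma\,\Phi^{-1}(1-\alpha) on \Delta(\boldsymbol{x}) achieves \mathrm{P_{FA}} = \alpha exactly. The R sketch below, with illustrative numbers, checks this and the resulting \mathrm{P_D} by simulation.

```r
# Calibrate the threshold on Delta(x) from its noise-only distribution:
# under A, Delta ~ N(0, N*sigma^2), so eta = sqrt(N)*sigma*qnorm(1 - alpha)
# gives P_FA = alpha. alpha, N, mu and sigma are illustrative values.
alpha <- 0.05; N <- 20; mu <- 0.5; sigma <- 1
eta <- sqrt(N) * sigma * qnorm(1 - alpha)

P_D_theory <- 1 - pnorm(eta, mean = N * mu, sd = sqrt(N) * sigma)

# Monte-Carlo sanity check: simulate Delta under each hypothesis.
set.seed(4)
trials <- 1e5
Delta_A <- colSums(matrix(rnorm(N * trials, mean = 0,  sd = sigma), nrow = N))
Delta_B <- colSums(matrix(rnorm(N * trials, mean = mu, sd = sigma), nrow = N))

c(P_FA_emp   = mean(Delta_A > eta),  # should be close to alpha
  P_D_emp    = mean(Delta_B > eta),  # should be close to P_D_theory
  P_D_theory = P_D_theory)
```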

In the next part, we will look for the optimum decision rule and develop the tools to evaluate the performance of our tests.

(The comic in the featured image is stolen from the amazing xkcd)
