FDR and Benjamini-Hochberg

We wish to test {N} null hypotheses {H_{01}, \dots, H_{0N}} indexed by the set {\{1, \dots, N\}}. The hypotheses indexed by {I_0 \subseteq \{1, \dots, N\}} are truly null with {|I_0| = N_0} and the remaining hypotheses are non-null. A test in this setting looks at the data and decides to accept or reject each {H_{0i}}. While devising such a test, one obviously wants to guard against rejecting too many true null hypotheses. A classical way of ensuring this is to allow only those tests whose Family Wise Error Rate (FWER) is controlled at a predetermined small level. The FWER is defined as

\displaystyle FWER := \mathop{\mathbb P} \left(\cup_{i \in I_0} \left\{\text{Reject } H_{0i} \right\} \right). \ \ \ \ \ (1)

It is very easy to design a test whose FWER is controlled at a predetermined level {\alpha}: reject or accept each hypothesis {H_{0i}} according to a test whose type I error is at most {\alpha/N}. By the union bound, one then has

\displaystyle FWER = \mathop{\mathbb P} \left(\cup_{i \in I_0} \left\{\text{Reject } H_{0i} \right\} \right) \leq \sum_{i \in I_0} \mathop{\mathbb P} \left\{\text{Reject } H_{0i} \right\} \leq \frac{\alpha N_0}{N} \leq \alpha.
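In code, the Bonferroni correction is just a comparison of each {p}-value with {\alpha/N}; here is a minimal sketch (my own illustration, not from the original post):

```python
import numpy as np

def bonferroni_reject(p_values, alpha=0.1):
    """Reject H_{0i} whenever p_i <= alpha / N (the Bonferroni correction)."""
    p = np.asarray(p_values)
    return p <= alpha / len(p)

# With N = 10 and alpha = 0.1, the per-hypothesis threshold is alpha/N = 0.01.
p = np.array([0.001, 0.008, 0.03, 0.2, 0.5, 0.6, 0.7, 0.8, 0.9, 0.95])
print(bonferroni_reject(p, alpha=0.1))   # only the first two p-values are rejected
```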

The above procedure is sometimes called the Bonferroni method. In modern theory of hypothesis testing, control of the FWER is considered too stringent mainly because it leads to tests that fail to reject many non-null hypotheses as well. The modern method is to insist on control of FDR (False Discovery Rate) as opposed to FWER. The FDR of a test is defined as

\displaystyle FDR = \mathop{\mathbb E} \left( \frac{V}{R \vee 1} \right)

where

\displaystyle R := \sum_{i=1}^N I \left\{\text{Reject } H_{0i} \right\} \text{ and } V := \sum_{i \in I_0} I \left\{\text{Reject } H_{0i} \right\}

and {R \vee 1 := \max(R, 1)}. The quantity {V/(R \vee 1)} is often called the FDP (False Discovery Proportion). FDR is therefore the expectation of FDP.

How does one design a test whose FDR is controlled at a predetermined level {\alpha} (e.g., {\alpha = 0.1}) and which rejects more often than the Bonferroni procedure? This was answered by Benjamini and Hochberg in a famous paper in 1995. Their procedure is described below. For each hypothesis {H_{0i}}, obtain a {p}-value {p_i}. For {i \in I_0}, the {p}-value {p_i} has the uniform distribution on {[0, 1]}. For {i \notin I_0}, the {p}-value {p_i} has some other distribution, typically more concentrated near 0. Let the ordered {p}-values be {p_{(1)} < \dots < p_{(N)}}. The BH procedure is the following:

\displaystyle \text{Reject } H_{0i} \text{ if and only if } p_{i} \leq \frac{i_{\max} \alpha}{N}

where

\displaystyle i_{\max} := \max \left\{1 \leq i \leq N : p_{(i)} \leq \frac{i \alpha}{N} \right\}.

In the event that {p_{(i)} > i \alpha/N} for all {i}, we take {i_{\max} = 0}. The BH procedure is probably easier to understand via the following sequential description. Start with {i = N} and keep accepting the hypothesis corresponding to {p_{(i)}} as long as {p_{(i)} > \alpha i/N}, decreasing {i} by one at each step. As soon as {p_{(i)} \leq i \alpha/N}, stop and reject all the hypotheses corresponding to {p_{(j)}} for {j \leq i}.
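Here is a short sketch of the procedure in code (my own illustration; the example {p}-values are arbitrary). It finds {i_{\max}} from the sorted {p}-values and rejects everything at or below the corresponding threshold.

```python
import numpy as np

def benjamini_hochberg(p_values, alpha=0.1):
    """Boolean rejection vector for the BH procedure at level alpha."""
    p = np.asarray(p_values)
    N = len(p)
    sorted_p = np.sort(p)
    thresholds = alpha * np.arange(1, N + 1) / N   # i * alpha / N for i = 1, ..., N
    below = np.nonzero(sorted_p <= thresholds)[0]
    if len(below) == 0:                            # i_max = 0: nothing is rejected
        return np.zeros(N, dtype=bool)
    i_max = below[-1] + 1                          # largest i with p_(i) <= i * alpha / N
    return p <= alpha * i_max / N                  # reject H_{0i} iff p_i <= i_max * alpha / N

p = np.array([0.001, 0.008, 0.039, 0.041, 0.2, 0.5, 0.6, 0.7, 0.8, 0.9])
print(benjamini_hochberg(p, alpha=0.1))   # the first two hypotheses are rejected here
```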

It should be clear that the BH procedure rejects hypotheses much more liberally than the Bonferroni method (which rejects when {p_i \leq \alpha/N}). Indeed, any hypothesis rejected by the Bonferroni method will also be rejected by the BH procedure. The famous Benjamini-Hochberg theorem gives the exact FDR of the BH procedure when the {p}-values are independent:

Theorem 1 (Benjamini-Hochberg) The FDR of the BH procedure is exactly equal to {N_0 \alpha/N} under the assumption that the {p}-values {p_1, \dots, p_N} are independent.
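To get a feel for the theorem, one can simulate it: draw {N_0} null {p}-values uniformly on {[0, 1]}, draw the remaining {p}-values from a distribution concentrated near zero (a Beta(1, 20) below, chosen purely for illustration), and average the false discovery proportion over many repetitions. This sketch assumes the benjamini_hochberg function from above.

```python
import numpy as np

rng = np.random.default_rng(0)
N, N0, alpha, reps = 50, 30, 0.1, 20000
fdp_sum = 0.0
for _ in range(reps):
    p_null = rng.uniform(size=N0)            # true nulls: independent uniform p-values
    p_alt = rng.beta(1, 20, size=N - N0)     # non-nulls: p-values piled up near zero
    p = np.concatenate([p_null, p_alt])
    reject = benjamini_hochberg(p, alpha)    # defined in the sketch above
    V = reject[:N0].sum()                    # false rejections (the first N0 coordinates are null)
    R = reject.sum()
    fdp_sum += V / max(R, 1)
print("empirical FDR:", fdp_sum / reps)      # should be close to N0 * alpha / N = 0.06
```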

There probably exist many proofs of this by-now classical result. Based on a cursory Google search, I was able to find two extremely slick and short proofs which I describe below. Prior to that, let me provide some rudimentary intuition for the specific form of the BH procedure. Based on the {p}-values {p_1,\dots, p_N}, our goal is to reject or accept each hypothesis {H_{0i}}. It is obvious that we will have to reject those for which {p_i} is small, but how small is the question. Suppose we decide to reject all hypotheses for which the {p}-value is less than or equal to {t}. For this procedure, the number of rejections and the number of false rejections are given by

\displaystyle R_t := \sum_{i=1}^N I \{p_i \leq t\} ~~ \text{ and } ~~ V_t := \sum_{i \in I_0} I \{p_i \leq t\} \ \ \ \ \ (2)

respectively. Consequently the FDR of this procedure is {FDR_t := \mathop{\mathbb E} \left[ V_t/(R_t \vee 1) \right]}. We would ideally like to choose {t} as large as possible subject to the constraint that {FDR_t \leq \alpha} (larger values of {t} lead to more rejections or discoveries). Unfortunately, we do not quite know what {\mathop{\mathbb E} \left[ V_t/(R_t \vee 1) \right]} is; we do not even know what {V_t/(R_t \vee 1)} is (if we did, we could have used it as a proxy for {FDR_t}). We do know what {R_t} is, but {V_t} requires knowledge of {I_0} which we do not have. However, the expectation of {V_t} equals {N_0 t}, which cannot be larger than the known quantity {Nt}. It is therefore reasonable to choose {t} as

\displaystyle \tau := \sup \left\{t \in [0, 1] : \frac{Nt}{R_t \vee 1} \leq \alpha \right\} \ \ \ \ \ (3)

and reject all {p}-values which are less than or equal to {\tau}. This intuitive procedure is in fact exactly the BH procedure (one can check that {\tau = \alpha i_{\max}/N} whenever {i_{\max} \geq 1}, so that the two rejection sets coincide); this is not very hard to see.
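A quick numerical check of this equivalence (again assuming the benjamini_hochberg sketch from above): since {R_t} only changes at observed {p}-values, it suffices to scan those as candidate thresholds when computing (3).

```python
import numpy as np

def reject_by_threshold(p_values, alpha=0.1):
    """Reject all p-values <= tau, where tau is the largest candidate threshold t
    satisfying N * t / max(R_t, 1) <= alpha, as in (3)."""
    p = np.asarray(p_values)
    N = len(p)
    best_t = -np.inf                          # sentinel: reject nothing if no candidate qualifies
    for t in np.sort(p):                      # R_t only jumps at observed p-values
        R_t = int(np.sum(p <= t))
        if N * t / max(R_t, 1) <= alpha:
            best_t = t
    return p <= best_t

rng = np.random.default_rng(1)
for _ in range(1000):
    p = rng.uniform(size=20)
    assert np.array_equal(reject_by_threshold(p, 0.1), benjamini_hochberg(p, 0.1))
print("the threshold formulation (3) and the i_max formulation agree on all 1000 draws")
```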

Proof One: This proof uses martingales and is due to Storey, Taylor and Siegmund in a paper published in 2004. The explanation above about an alternative formulation of the BH procedure implies that we only need to prove

\displaystyle \mathop{\mathbb E} \frac{V_{\tau}}{R_{\tau} \vee 1} = \frac{N_0 \alpha}{N}. \ \ \ \ \ (4)

where {V_t} and {R_t} are defined as in (2). The important observation now is that the process {\{V_t/t : 0 < t \leq 1\}} is a backward martingale, i.e.,

\displaystyle \mathop{\mathbb E} \left( \frac{V_s}{s} \bigg| \frac{V_{t'}}{t'} , t' \geq t \right) = \frac{V_t}{t}

for all {0 < s < t \leq 1}. This fact involves only the independent uniform null {p}-values and is easy to verify. With {\tau} defined as in (3), Doob's optional stopping theorem (applied to this backward martingale at the stopping time {\tau}) gives

\displaystyle \mathop{\mathbb E} \left(\frac{V_{\tau}}{\tau} \right) = \mathop{\mathbb E} \left(\frac{V_1}{1} \right) = N_0.

Now the definition (3) of {\tau} implies that {N \tau/(R_{\tau} \vee 1) = \alpha} (this requires an argument!). As a result, we can replace {\tau} by {\alpha (R_{\tau} \vee 1)/N} to obtain (4). This completes the proof.

Proof Two: This proof works directly with the original formulation of the BH procedure. I found it in a recent paper by Heesen and Janssen (see page 25 of arXiv:1410.8290). We may assume that {I_0} is nonempty, for otherwise {V \equiv 0} and there is nothing to prove. Let {p := (p_1, \dots, p_N)} and let {R(p)} denote the number of rejections made by the BH procedure. From the description, it should be clear that {R(p)} is exactly equal to {i_{\max}}. We can therefore write the FDP as

\displaystyle FDP = \frac{V}{R(p) \vee 1} = \sum_{j \in I_0} \frac{I \left\{p_j \leq \alpha R(p)/N \right\} }{R(p) \vee 1}.

We now fix {j \in I_0} and let {\tilde{p} := (p_1, \dots, p_{j-1}, 0, p_{j+1}, \dots, p_N)} i.e., the {j}th {p}-value is replaced by {0} and the rest of the {p}-values are unchanged. Let {R(\tilde{p})} denote the number of rejections of the BH procedure for {\tilde{p}}. It should be noted that {R(\tilde{p}) \geq 1} because of the presence of a zero {p}-value in {\tilde{p}}. The key observation now is

\displaystyle \frac{I \left\{p_j \leq \alpha R(p)/N \right\} }{R(p) \vee 1} = \frac{I \left\{p_j \leq \alpha R(\tilde{p})/N \right\} }{R(\tilde{p})}. \ \ \ \ \ (5)

To see this, it is enough to note that {I \left\{p_j \leq \alpha R(p)/N \right\} = I \left\{p_j \leq \alpha R(\tilde{p})/N \right\}} and that {R(p) = R(\tilde{p})} when {p_j \leq \alpha R(p)/N}. It is straightforward to verify these facts from the definition of the BH procedure. Using (5), we can write

\displaystyle FDR = \sum_{j \in I_0} \mathop{\mathbb E} \frac{I \left\{p_j \leq \alpha R(p)/N \right\} }{R(p) \vee 1} = \sum_{j \in I_0} \mathop{\mathbb E} \frac{I \left\{p_j \leq \alpha R(\tilde{p})/N \right\} }{R(\tilde{p})}.

The independence assumption on {p_1, \dots, p_N} now implies that {p_j} and {R(\tilde{p})} are independent. Also, because {p_j} is uniformly distributed on {[0, 1]} for {j \in I_0}, conditioning on {R(\tilde{p})} gives {\mathop{\mathbb E} \left[ I \left\{p_j \leq \alpha R(\tilde{p})/N \right\}/R(\tilde{p}) \right] = \mathop{\mathbb E} \left[ (\alpha R(\tilde{p})/N)/R(\tilde{p}) \right] = \alpha/N} for each {j \in I_0}. Summing over {j \in I_0}, we deduce that {FDR = \alpha N_0/N} and this completes the proof.
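For readers who like numerical sanity checks, the key identity (5) can also be verified directly on simulated {p}-values (a small sketch, reusing the benjamini_hochberg function from earlier in the post):

```python
import numpy as np

rng = np.random.default_rng(2)
alpha, N = 0.1, 15
for _ in range(2000):
    p = rng.uniform(size=N)
    R_p = int(benjamini_hochberg(p, alpha).sum())              # R(p): number of BH rejections
    for j in range(N):
        p_tilde = p.copy()
        p_tilde[j] = 0.0                                       # replace the j-th p-value by zero
        R_pt = int(benjamini_hochberg(p_tilde, alpha).sum())   # R(p~) >= 1 automatically
        lhs = (p[j] <= alpha * R_p / N) / max(R_p, 1)
        rhs = (p[j] <= alpha * R_pt / N) / R_pt
        assert lhs == rhs                                      # identity (5)
print("identity (5) holds on every simulated example")
```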

Upper bounds for mutual information

Consider a pair of jointly distributed random variables {\Theta} and {X}. We think of {\Theta} as the unknown parameter and {X} as the data from which we have to draw inference about {\Theta}. Assume that the marginal distribution of {\Theta} is uniform over a finite set {F} of size {N} i.e., {\mathop{\mathbb P} \{\Theta = \theta\} = 1/N} for each {\theta \in F}.

The mutual information between {\Theta} and {X}, denoted by {I(\Theta, X)}, is defined as the Kullback-Leibler divergence (or relative entropy) between the joint distribution of {\Theta} and {X} and the product of their marginal distributions. Recall that the Kullback-Leibler divergence between two probability measures {P} and {Q} is given by {D(P||Q) := \int p \log(p/q) d\mu} where {p} and {q} denote densities of {P} and {Q} with respect to a common measure {\mu}.

A clean expression for {I(\Theta, X)} can be written down in terms of the conditional distributions {P_{\theta}} of {X} given {\Theta = \theta}:

\displaystyle I(\Theta, X) = \frac{1}{N} \sum_{\theta \in F} D(P_{\theta}||\bar{P})

where {\bar{P} := \frac{1}{N} \sum_{\theta \in F} P_{\theta}}.
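For distributions on a finite alphabet, this expression can be computed directly. Here is a small sketch (my own toy example; the particular conditional distributions are arbitrary) evaluating {I(\Theta, X)} via the displayed formula.

```python
import numpy as np

def kl(p, q):
    """Kullback-Leibler divergence D(P || Q) between distributions on a finite alphabet."""
    p, q = np.asarray(p, float), np.asarray(q, float)
    mask = p > 0
    return float(np.sum(p[mask] * np.log(p[mask] / q[mask])))

def mutual_information(P_list):
    """I(Theta, X) when Theta is uniform over the rows of P_list (the conditional laws P_theta)."""
    P = np.asarray(P_list, dtype=float)
    P_bar = P.mean(axis=0)                                # the mixture \bar{P}
    return float(np.mean([kl(P_theta, P_bar) for P_theta in P]))

# Toy example: three conditional distributions of X on a four-letter alphabet.
P_list = [[0.70, 0.10, 0.10, 0.10],
          [0.10, 0.70, 0.10, 0.10],
          [0.25, 0.25, 0.25, 0.25]]
print("I(Theta, X) =", mutual_information(P_list))
```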

Mutual information is fundamental to information theory. It also appears in many places in statistics. For example, it provides, via Fano’s inequality, a lower bound on how well {\Theta} can be estimated on the basis of {X}:

\displaystyle \inf_{\hat{\Theta}(X)} \mathop{\mathbb P} \left\{\Theta \neq \hat{\Theta}(X) \right\} \geq 1 - \frac{I(\Theta, X) + \log 2}{\log N}. \ \ \ \ \ (1)

Here {\hat{\Theta}(X)} is any estimator for {\Theta} based on {X} i.e., any function of {X} that takes values in {F}.

Determining {I(\Theta, X)} exactly is usually intractable, however, and one typically works with appropriate bounds on it. This post is about upper bounds on {I(\Theta, X)}. Note that an upper bound on {I(\Theta, X)} provides, via (1), a lower bound on the error of any estimator of {\Theta}.

Most bounds for {I(\Theta, X)} are based on the identity

\displaystyle I(\Theta, X) = \frac{1}{N} \inf_Q \sum_{\theta \in F} D(P_{\theta} || Q) \ \ \ \ \ (2)

which is a consequence of the fact that {\sum_{\theta \in F} D(P_{\theta} || Q) = \sum_{\theta \in F} D(P_{\theta} || \bar{P}) + N D(\bar{P} || Q) } for every {Q}. Different choices of {Q} in (2) give different upper bounds on {I(\Theta, X)}. One gets, for example,

\displaystyle I(\Theta, X) \leq \min_{\theta' \in F} \frac{1}{N} \sum_{\theta \in F} D(P_{\theta} || P_{\theta'}) \leq \frac{1}{N^2} \sum_{\theta, \theta' \in F} D(P_{\theta} || P_{\theta'}) \leq \max_{\theta, \theta' \in F} D(P_{\theta} || P_{\theta'}) . \ \ \ \ \ (3)

These bounds are very frequently used in conjunction with Fano’s inequality (1). The last bound {\max_{\theta, \theta' \in F} D(P_{\theta} || P_{\theta'})} is called the Kullback-Leibler diameter of {\{P_{\theta} : \theta \in F\}}.
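The chain of bounds in (3) is easy to evaluate for the toy family above (a sketch reusing the kl and mutual_information functions from the previous snippet); all three are upper bounds on the exact mutual information, and they get progressively weaker.

```python
import numpy as np

def bounds_in_3(P_list):
    """The three upper bounds in (3), from the sharpest to the KL diameter."""
    P = [np.asarray(row, float) for row in P_list]
    N = len(P)
    pairwise = np.array([[kl(P[i], P[j]) for j in range(N)] for i in range(N)])
    first = pairwise.mean(axis=0).min()    # min_{theta'} (1/N) sum_theta D(P_theta || P_theta')
    second = pairwise.mean()               # (1/N^2) sum_{theta, theta'} D(P_theta || P_theta')
    diameter = pairwise.max()              # max_{theta, theta'} D(P_theta || P_theta')
    return first, second, diameter

print("exact I(Theta, X):", mutual_information(P_list))
print("bounds in (3):    ", bounds_in_3(P_list))
```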

I will try to argue below that the bounds in (3) are, in general, quite inaccurate and describe some improved bounds due to Yang and Barron [1] and Haussler and Opper [2]. 

To demonstrate that these are bad bounds, consider the following alternate, equally crappy, bound for {I(\Theta, X)}:

\displaystyle I(\Theta, X) \leq \log N. \ \ \ \ \ (4)

To see why this is true, just note that {D(P_{\theta} || \bar{P}) \leq \log N} for each {\theta \in F} which is a consequence of the fact that the density of {P_{\theta}} with respect to {\bar{P}} is trivially bounded from above by {N}.

The bounds (3) and (4) are complementary; (3) only involves the pairwise Kullback-Leibler divergences {D(P_{\theta}||P_{\theta'})} for {\theta, \theta' \in F} while the bound (4) only considers the cardinality of {F}. To see why (3) can be inaccurate, just consider the simple case when {F = \{\theta, \theta'\}} with {D(P_{\theta}||P_{\theta'})} much larger than {\log 2}. On the other hand, (4) can be a crappy bound when {\{P_{\theta}: \theta \in F\}} is a set with large cardinality and small Kullback-Leibler diameter.

Yang and Barron [1] proved the following upper bound for {I(\Theta, X)} that is a simultaneous improvement of both (3) and (4). For every finite set of probability measures {\{Q_{\alpha}, \alpha \in G\}} with {|G| = M}, we have

\displaystyle I(\Theta, X) \leq \log M + \frac{1}{N} \sum_{\theta \in F} \min_{\alpha \in G} D(P_{\theta} || Q_{\alpha}). \ \ \ \ \ (5)

To see why (5) is an improvement of (3), just fix {\theta' \in F} and take {\{Q_{\alpha}, \alpha \in G\}} to be the singleton set {\{P_{\theta'}\}} so that {M = 1}; then write down the bound given by (5) and take the minimum over {\theta' \in F}. To see why (5) is an improvement of (4), just take {\{Q_{\alpha}, \alpha \in G\}} to be identical to the class {\{P_{\theta}, \theta \in F\}}.

Yang and Barron [1] crucially used (5) in conjunction with Fano’s inequality to give simple proofs of minimax lower bounds in many classical nonparametric estimation problems including density estimation and nonparametric regression. Their arguments actually apply to any situation where one has accurate upper and lower bounds on the global metric entropy numbers of the parameter space. They also point out that the weaker bounds in (3) cannot be used to yield optimal minimax lower bounds that depend on global metric entropy properties alone. I will try to explain these in another post.

Inequality (5) has an almost trivial proof. Let {p_{\theta}} denote the density of {P_{\theta}} and {q_{\alpha}} denote the density of {Q_{\alpha}}, all with respect to a common dominating measure {\mu}. Let {\bar{Q} := \sum_{\alpha \in G} Q_{\alpha}/M} and note, from (2), that {I(\Theta, X) \leq \sum_{\theta \in F} D(P_{\theta} || \bar{Q})/N}. Observe now that the ratio of the densities of {P_{\theta}} and {\bar{Q}} is bounded from above by {M \min_{\alpha \in G} p_{\theta}/q_{\alpha}}, which proves (5).

It turns out that (5) is a consequence of the following bound for {I(\Theta, X)} due to Haussler and Opper [2]:

\displaystyle I(\Theta, X) \leq - \frac{1}{N} \sum_{\theta \in F} \log \left(\frac{1}{M} \sum_{\alpha \in G} \exp \left(- D(P_{\theta}||Q_{\alpha}) \right) \right). \ \ \ \ \ (6)
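Both (5) and (6) are straightforward to compute once the divergences {D(P_{\theta}||Q_{\alpha})} are available. The sketch below (reusing kl and mutual_information from earlier, with a hypothetical two-element family {\{Q_{\alpha}\}} chosen purely for illustration) compares the two bounds with the exact mutual information; (6) is always at least as sharp as (5).

```python
import numpy as np

def yang_barron_bound(P_list, Q_list):
    """Bound (5): log M + (1/N) * sum_theta min_alpha D(P_theta || Q_alpha)."""
    mins = [min(kl(p, q) for q in Q_list) for p in P_list]
    return float(np.log(len(Q_list)) + np.mean(mins))

def haussler_opper_bound(P_list, Q_list):
    """Bound (6): -(1/N) * sum_theta log( (1/M) * sum_alpha exp(-D(P_theta || Q_alpha)) )."""
    terms = []
    for p in P_list:
        divs = np.array([kl(p, q) for q in Q_list])
        terms.append(-np.log(np.mean(np.exp(-divs))))
    return float(np.mean(terms))

# A hypothetical "covering" family: here simply two of the P_theta themselves.
Q_list = [P_list[0], P_list[2]]
print("exact I(Theta, X):  ", mutual_information(P_list))
print("Haussler-Opper (6): ", haussler_opper_bound(P_list, Q_list))
print("Yang-Barron (5):    ", yang_barron_bound(P_list, Q_list))
```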

Indeed, one obtains (5) from (6) by lower bounding the inner sum {\sum_{\alpha \in G} \exp(-D(P_{\theta}||Q_{\alpha}))} by its largest term {\exp(-\min_{\alpha \in G} D(P_{\theta}||Q_{\alpha}))}. Inequality (6) is proved by a very clever application of Jensen's inequality. First observe that, because {I(\Theta, X) \leq \frac{1}{N} \sum_{\theta \in F} D(P_{\theta} || \bar{Q})}, it is enough to show that

\displaystyle D(P_{\theta} || \bar{Q}) \leq -\log \left(\frac{1}{M} \sum_{\alpha \in G} \exp \left(- D(P_{\theta}||Q_{\alpha}) \right) \right)

for every {\theta \in F}. Write

\displaystyle D(P_{\theta} || \bar{Q}) = - \int p_{\theta} \log \left( \frac{1}{M} \sum_{\alpha \in G} \frac{q_{\alpha}}{p_{\theta}} \right) d\mu = - \int p_{\theta} \log \left( \frac{1}{M} \sum_{\alpha \in G} \exp \left(\log \frac{q_{\alpha}}{p_{\theta}} \right) \right) d\mu.

Consider now the mapping

\displaystyle \Psi : (u_{\alpha}, \alpha \in G) \mapsto -\log \left(\frac{1}{M} \sum_{\alpha \in G} e^{u_{\alpha}} \right)

so that

\displaystyle D(P_{\theta} || \bar{Q}) = \int p_{\theta} \, \Psi \left(\log \frac{q_{\alpha}}{p_{\theta}}, \alpha \in G \right) d\mu.

The crucial point here is that the mapping {\Psi} is concave. Inequality (6) then follows by Jensen's inequality. To see why {\Psi} is concave, use Hölder's inequality:

\displaystyle \frac{1}{M} \sum_{\alpha \in G} \left(e^{u_{\alpha}}\right)^{1-\gamma} \left(e^{v_{\alpha}}\right)^{\gamma} \leq \left(\frac{1}{M} \sum_{\alpha \in G} e^{u_{\alpha}} \right)^{1-\gamma} \left(\frac{1}{M} \sum_{\alpha \in G} e^{v_{\alpha}} \right)^{\gamma}

and then take logs on both sides. Haussler and Opper [2] also proved a very nice lower bound for {I(\Theta, X)} that looks very similar to the upper bound in (6); that might be the topic of a future post.


[1] Y. Yang and A. Barron. Information-theoretic determination of minimax rates of convergence. Annals of Statistics, 27: 1564-1599, 1999.

[2] D. Haussler and M. Opper. Mutual information, metric entropy and cumulative relative entropy risk. Annals of Statistics, 25: 2451-2492, 1997.

Le Cam’s Hypothesis Testing Inequality

My first post is about a beautiful hypothesis testing inequality due to Le Cam [1, Chapter 16, Section 4]. Here is the setting. There is an unknown distribution {P} on which we have two hypotheses. The first hypothesis is {H_0 : P \in \mathcal{P}_0} and the second is {H_1: P \in \mathcal{P}_1} where {\mathcal{P}_0} and {\mathcal{P}_1} denote classes of probability measures. Given i.i.d. data {X_1, \dots, X_n} from {P}, the goal is to figure out which of the two hypotheses is the right one.

As one gets more and more data points, i.e., as {n} increases, this problem intuitively should get easier. Le Cam's inequality quantifies this by asserting that the error in this hypothesis testing problem decreases exponentially in {n}. The precise statement and a proof sketch of this inequality are given below. This inequality has many applications; a particularly interesting one is its role in establishing frequentist convergence rates of posterior distributions; this will be outlined in a future post.

We first need to specify the notion of error in hypothesis testing. Given a test {\phi} (which is simply a {[0, 1]}-valued function of the data), its type I error is {\sup_{P \in \mathcal{P}_0} \mathop{\mathbb E}_P \phi(X_1, \dots, X_n)} while its type II error is {\sup_{P \in \mathcal{P}_1} \mathop{\mathbb E}_P \left( 1 - \phi(X_1, \dots, X_n)\right) }. For ease of notation, let me write {P^n \phi} for {\mathop{\mathbb E}_P\phi(X_1, \dots, X_n)} where {P^n} is the {n}-fold product of {P} with itself. We define the error of a test {\phi} to be the sum of its type I and type II errors. The smallest error in testing {H_0} against {H_1} is therefore

\displaystyle E_n := \inf_{\phi} \sup_{P_0 \in \mathcal{P}_0, P_1 \in \mathcal{P}_1} \left( P_0^n \phi + P_1^n (1 - \phi) \right).

Le Cam’s inequality states that {E_n} decreases exponentially in {n}. The rate of this decrease depends on how close the classes {\mathcal{P}_0} and {\mathcal{P}_1} are. This closeness is measured by the so-called Hellinger affinity between {\mathcal{P}_0} and {\mathcal{P}_1} defined as

\displaystyle \rho(\mathcal{P}_0, \mathcal{P}_1) := \sup_{P_0 \in \mathcal{P}_0, P_1 \in \mathcal{P}_1} \int \sqrt{p_0 p_1} d\mu

where {p_0} and {p_1} denote the densities of {P_0} and {P_1} with respect to a common dominating measure {\mu}. The quantity {\int \sqrt{p_0 p_1} \, d\mu = 1 - \frac{1}{2}\int (\sqrt{p_0} - \sqrt{p_1})^2 d\mu} measures the closeness of {P_0} and {P_1}, with larger values indicating greater closeness; hence the name affinity.
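On a finite alphabet the Hellinger affinity is just a sum, which makes it easy to play with; a minimal sketch (my own illustration):

```python
import numpy as np

def hellinger_affinity(p0, p1):
    """rho(P0, P1) = sum_x sqrt(p0(x) * p1(x)) for distributions on a finite alphabet."""
    p0, p1 = np.asarray(p0, float), np.asarray(p1, float)
    return float(np.sum(np.sqrt(p0 * p1)))

# Nearby distributions have affinity close to 1; disjointly supported ones have affinity 0.
print(hellinger_affinity([0.5, 0.5, 0.0], [0.4, 0.6, 0.0]))   # approximately 0.995
print(hellinger_affinity([1.0, 0.0, 0.0], [0.0, 1.0, 0.0]))   # exactly 0.0
```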

The precise statement of Le Cam’s inequality is

\displaystyle E^{1/n}_n \leq \rho \left(co(\mathcal{P}_0), co(\mathcal{P}_1) \right) \ \ \ \ \ (1)

where {co(\mathcal{P}_0)} denotes the convex hull of {\mathcal{P}_0}, i.e., the class of all finite convex combinations of probability measures in {\mathcal{P}_0}. As a consequence, we have that {E_n} decreases exponentially in {n} as long as {\rho(co(\mathcal{P}_0), co(\mathcal{P}_1))} is bounded from above by a constant strictly smaller than one. It must be noted that (1) is a finite sample result, i.e., it is allowed that {\mathcal{P}_0} and {\mathcal{P}_1} depend on {n}. It might seem strange that (1) involves the convex hulls of {\mathcal{P}_0} and {\mathcal{P}_1} instead of {\mathcal{P}_0} and {\mathcal{P}_1} themselves. This can perhaps be explained by the simple observation that {\sup_{P \in \mathcal{P}} P \phi = \sup_{P \in co(\mathcal{P})} P \phi}.

Let me now provide a sketch of the proof of (1). Let {\mathcal{P}_0^n := \{P^n : P \in \mathcal{P}_0\}} and similarly define {\mathcal{P}_1^n}. Rewrite {E_n} as

\displaystyle E_n = \inf_{\phi} \sup_{P_0 \in \mathcal{P}^n_0, P_1 \in \mathcal{P}_1^n} \left(P_0 \phi + P_1 (1 - \phi)\right).

The first idea is to interchange the inf and sup above. This is formally done via a minimax theorem; see the note by David Pollard for a clear exposition of a minimax theorem and its application to this particular example. This gives

\displaystyle E_n = \sup_{P_0 \in co(\mathcal{P}^n_0), P_1 \in co(\mathcal{P}^n_1)} \inf_{\phi}\left(P_0 \phi + P_1 (1 - \phi)\right).

Minimax theorems require the domains (on which the inf and sup are taken) to be convex which is why one gets convex hulls above. The advantage of using the minimax theorem is that the infimum above can be easily evaluated explicitly. This is because

\displaystyle P_0 \phi + P_1 (1 - \phi) = \int \left(p_0 \phi + p_1 (1 - \phi) \right) d\mu \geq \int \min(p_0, p_1) d\mu.

We thus get

\displaystyle E_n = \sup_{P_0 \in co(\mathcal{P}_0^n), P_1 \in co(\mathcal{P}_1^n)} \int \min(p_0, p_1) d\mu.

The quantity {\int \min(p_0, p_1) d\mu} also measures closeness between {P_0} and {P_1}. It is actually called the total variation affinity between {P_0} and {P_1}. The trouble with it is that it is difficult to see how the right hand side above changes with {n}. This is the reason why one switches to Hellinger affinity. The elementary inequality {\min(a, b) \leq \sqrt{ab}} gives

\displaystyle E_n \leq \sup_{P_0 \in co(\mathcal{P}_0^n), P_1 \in co(\mathcal{P}_1^n)} \int \sqrt{p_0 p_1} d\mu = \rho \left( co(\mathcal{P}_0^n), co(\mathcal{P}_1^n) \right).

The proof is now completed by showing that

\displaystyle \rho \left( co(\mathcal{P}_0^n), co(\mathcal{P}_1^n) \right) \leq \rho \left(co(\mathcal{P}_0), co(\mathcal{P}_1) \right)^n. \ \ \ \ \ (2)

This inequality is a crucial part of the proof. This is the reason why one works with Hellinger affinity as opposed to the total variation affinity although the latter is more directly related to {E_n}. Let us see why (2) holds when {n = 2}. The general case can be tackled similarly.

Any probability measure in {co(\mathcal{P}_0^2)} is of the form {A = \sum_i \alpha_i P^2_{i0}} for some {P_{i0} \in \mathcal{P}_0}. Similarly, any probability measure in {co(\mathcal{P}_1^2)} is of the form {B = \sum_j \beta_j P^2_{j1}} for some {P_{j1} \in \mathcal{P}_1}. The Hellinger affinity between {A} and {B} is therefore

\displaystyle \int \int \sqrt{\sum_{i} \alpha_i p_{i0}(x) p_{i0}(y)} \sqrt{\sum_j \beta_j p_{j1}(x) p_{j1}(y)} d\mu(x) d\mu(y).

Now if {m_0(x) := \sum_{i} \alpha_i p_{i0}(x)} and {m_1(x) := \sum_j \beta_j p_{j1}(x)}, we can rewrite the above as

\displaystyle \int \sqrt{m_0(x)} \sqrt{m_1(x)} \left[ \int \sqrt{\frac{\sum_{i} \alpha_i p_{i0}(x) p_{i0}(y)}{\sum_i \alpha_i p_{i0}(x)}} \sqrt{\frac{\sum_{j} \beta_j p_{j1}(x) p_{j1}(y)}{\sum_j \beta_j p_{j1}(x)}} d\mu(y)\right] d\mu(x).

Fixing {x} and looking at the inner integral above, the densities (as functions of {y}) involved are all convex combinations of densities in {\mathcal{P}_0} and {\mathcal{P}_1} respectively. Therefore the inner integral above is at most {\rho(co(\mathcal{P}_0), co(\mathcal{P}_1))}, which means that the above quantity is at most

\displaystyle \rho(co(\mathcal{P}_0), co(\mathcal{P}_1)) \int \sqrt{m_0(x) m_1(x)} d\mu(x).

By the same logic, {\int \sqrt{m_0 m_1} \leq \rho(co(\mathcal{P}_0), co(\mathcal{P}_1))}. This proves (2) for {n = 2}. The general case is not much more difficult. This completes the sketch of the proof of (1).
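For singleton classes {\mathcal{P}_0 = \{P_0\}} and {\mathcal{P}_1 = \{P_1\}}, the inequality (2) holds with equality, since the affinity factorizes over product measures. Here is a quick numerical check for {n = 2} on a finite alphabet (reusing hellinger_affinity from the sketch above):

```python
import numpy as np

p0 = np.array([0.5, 0.3, 0.2])
p1 = np.array([0.2, 0.3, 0.5])

# The two-fold product measures, flattened into distributions on the product alphabet.
p0_sq = np.outer(p0, p0).ravel()
p1_sq = np.outer(p1, p1).ravel()

print(hellinger_affinity(p0_sq, p1_sq))        # rho(P0^2, P1^2)
print(hellinger_affinity(p0, p1) ** 2)         # rho(P0, P1)^2 -- the same number
```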


[1] L. Le Cam. Asymptotic Methods in Statistical Decision Theory. Springer-Verlag, New York, 1986.