Generalized Neyman-Pearson Lemma

We review the generalized Neyman-Pearson lemma; see [CK01].

Let {(\Omega, \mathcal{F}, \mu)} be a probability space, and let {\mathcal{G}, \mathcal{H} \subset L^1(\mu)} be two families of {\mu}-integrable random variables. Define

\displaystyle \chi = \{X: (\Omega, \mathcal{F}) \rightarrow ([0,1], \mathcal{B}([0,1])) \ \text{measurable}\}

and

\displaystyle \chi_x = \{X\in \chi: \mathbb{E}[H X] \le x, \ \forall H \in \mathcal{H}\}.

The problem is to find

\displaystyle V(x) = \sup_{X\in \chi_x} \inf_{G\in \mathcal{G}} \mathbb{E} [GX].

In other words, the problem is to maximize

\displaystyle \gamma(X) := \inf_{G\in \mathcal{G}} \mathbb{E}[ GX]

subject to the constraint

\displaystyle s(X) = \sup_{H\in \mathcal{H}} \mathbb{E} [HX]\le x.
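For orientation, the classical Neyman-Pearson lemma is the singleton case: assuming two probability measures {P, Q \ll \mu} (an illustration of the setup, not part of [CK01]'s notation), take

\displaystyle \mathcal{H} = \Big\{\frac{dP}{d\mu}\Big\}, \qquad \mathcal{G} = \Big\{\frac{dQ}{d\mu}\Big\}.

Then {\chi_x} is the set of randomized tests of size at most {x} under the null {P}, and {V(x) = \sup_{X\in \chi_x} \mathbb{E}^Q[X]} is the maximal power against the alternative {Q}.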

Assumption 1 {\mathcal{G}\subset L^1(\mu)} is convex and closed under {\mu}-a.e. convergence.

The following theorem is a straightforward generalization of the result of [CK01].

Theorem 1 Suppose Assumption 1 holds. Then there exists {(\hat G, \hat H, \hat z, \hat X) \in \mathcal{G} \times \overline{Co(\mathcal{H})} \times (0,\infty) \times \chi_x} such that

  1. For all {(X, G) \in \chi_x \times \mathcal{G}}, the following holds:

    \displaystyle \mathbb{E} [\hat G X] \le \mathbb{E} [\hat G \hat X] \le \mathbb{E} [ G \hat X].

    This implies the existence of a saddle point, i.e.

    \displaystyle V(x) = \mathbb{E} [\hat G \hat X] = \sup_{X\in \chi_x} \inf_{G\in \mathcal{G}} \mathbb{E} [GX] = \inf_{G\in \mathcal{G}} \sup_{X\in \chi_x} \mathbb{E} [GX].

  2. Let {\tilde V(z) = \inf_{(G, H)\in \mathcal{G}\times \overline{Co(\mathcal{H})}} \mathbb{E} [(G- zH)^+]}. Then,

    \displaystyle V(x) = \inf_{z>0} \{xz + \tilde V(z)\}.

    Moreover, {\hat z} is given by

    \displaystyle \hat z = \arg\min_{z>0} \{xz + \tilde V(z)\}.

  3. For all {z\ge 0}, there exists {(\hat G_z, \hat H_z) \in \mathcal{G}\times \overline{Co(\mathcal{H})}} such that

    \displaystyle \tilde V(z) = \mathbb{E} [(\hat G_z- z \hat H_z)^+].

    {(\hat G, \hat H)} can be taken as {\hat G = \hat G_{\hat z}} and {\hat H = \hat H_{\hat z}}.

  4. {\hat X} can take the form of

    \displaystyle \hat X = I_{\{\hat G > \hat z \hat H\}} + B I_{\{\hat z \hat H = \hat G\}}, \ \ \ \ \ (1)

    where {B\in L^0(\mathcal{F})} is chosen to satisfy

    \displaystyle \mathbb{E}[\hat H \hat X] = x. \ \ \ \ \ (2)

  5. With this choice, we have

    \displaystyle \mathbb{E}[\hat G \hat X] = \mathbb{E}[(\hat G - \hat z \hat H)^+ ] + \hat z x, \ \ \ \ \ (3)

    and {\sup_{H\in \mathcal{H}} \mathbb{E} [ H \hat X] = x}.
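The identities above can be sanity-checked numerically. The sketch below is a toy example of my own (a four-point {\Omega} with singletons {\mathcal{G} = \{g\}}, {\mathcal{H} = \{h\}}; the numbers are not from [CK01]): it recovers {\hat z} by scanning the dual map {z \mapsto xz + \tilde V(z)}, builds {\hat X} as in (1) with {B} from (2), and verifies identity (3) together with the saddle value.

```python
import numpy as np

# Toy check of Theorem 1 in the classical singleton case G = {g}, H = {h}
# on a 4-point sample space (an illustration I chose, not from [CK01]).
mu = np.full(4, 0.25)                 # uniform reference measure on Omega
h = np.ones(4)                        # null density, plays the role of hatH
g = np.array([3.0, 2.0, 2.0, 1.0])    # alternative density, plays the role of hatG
x = 0.4                               # constraint level in chi_x

# Item 2: Vtilde(z) = E[(g - z h)^+]; scan V(x) = inf_{z>0} {xz + Vtilde(z)}.
zs = np.linspace(0.0, 4.0, 401)
duals = x * zs + np.array([np.sum(mu * np.clip(g - z * h, 0.0, None)) for z in zs])
z_hat = zs[np.argmin(duals)]          # minimizer hatz; here the kink at z = 2

# Item 4: hatX = 1{g > z h} + B 1{g = z h}, with B chosen so that (2) holds.
above = g > z_hat * h                 # accept for sure
tie = np.isclose(g, z_hat * h)        # randomize on the tie set
B = (x - np.sum(mu * h * above)) / np.sum(mu * h * tie)
X_hat = above.astype(float) + B * tie

power = np.sum(mu * g * X_hat)        # E[hatG hatX]
print(z_hat, B, power)

assert np.isclose(np.sum(mu * h * X_hat), x)    # (2): the constraint binds
assert np.isclose(power, duals.min())           # saddle value equals the dual value
assert np.isclose(power,                        # identity (3)
                  np.sum(mu * np.clip(g - z_hat * h, 0.0, None)) + z_hat * x)
```

Here the tie set {\{\hat G = \hat z \hat H\}} carries positive mass, so the randomization constant {B} in (1) is strictly between 0 and 1, which is exactly why (2) can be met with equality.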
