Three related optimization problems

We consider three optimization problems. The first is an optimal stopping problem, the second is a linear programming problem motivated by the first, and the last is the dual of the second. We expect the value functions of all three problems to be equal to each other under certain assumptions. Given a filtered probability space {(\Omega, \mathcal{F}, (\mathcal{F}_t)_{t\ge 0}, \mathbb{P})}, let {(X_t)_{t\ge 0}} be a one-dimensional Markov diffusion process with generator {\mathcal{A}}, defined by

\displaystyle \mathcal{A} \varphi = b \varphi' + \frac 1 2 \sigma^2 \varphi'', \forall \varphi \in \mathcal{D}.

In the above, {\mathcal{D} = C_0^2(\mathbb{R})} is the space of test functions. Let {\mathcal{M}_1} be the collection of countably additive nonnegative measures on {\mathcal{B}(\mathbb{R})} with total mass bounded by 1. Also, we denote

\displaystyle \langle \varphi, \nu \rangle = \int_\mathbb{R} \varphi(x) \nu (dx), \quad \forall (\varphi, \nu) \in \mathcal{D}\times \mathcal{M}_1.
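To keep these objects concrete, here is a minimal numerical sketch (not part of the argument): the generator {\mathcal{A}} as a finite-difference matrix on a grid. The drift {b(x) = -x} and {\sigma \equiv 1} (an Ornstein–Uhlenbeck process), the grid, and the test function are illustrative assumptions only.

    import numpy as np

    def generator_matrix(x, b, sigma):
        """Finite-difference approximation of A phi = b phi' + (1/2) sigma^2 phi''
        on a uniform grid x, with zero rows at the boundary (test functions
        vanish there, consistent with D = C_0^2)."""
        n, h = len(x), x[1] - x[0]
        A = np.zeros((n, n))
        for i in range(1, n - 1):
            diff  = 0.5 * sigma(x[i]) ** 2 / h ** 2   # coefficient from phi''
            drift = b(x[i]) / (2 * h)                 # coefficient from phi'
            A[i, i - 1] = diff - drift
            A[i, i]     = -2 * diff
            A[i, i + 1] = diff + drift
        return A

    x = np.linspace(-5.0, 5.0, 201)
    A = generator_matrix(x, b=lambda y: -y, sigma=lambda y: 1.0)

    # Sanity check on phi(x) = exp(-x^2): by hand, A phi = (4x^2 - 1) exp(-x^2).
    phi = np.exp(-x ** 2)
    print(np.abs(A @ phi - (4 * x ** 2 - 1) * phi)[1:-1].max())  # small O(h^2) error

On the grid, a measure in {\mathcal{M}_1} becomes a nonnegative vector of point masses with total mass at most 1, and {\langle \varphi, \nu \rangle} becomes a dot product; we reuse this picture below.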

Define the adjoint operator {\mathcal{A}^*: \mathcal{M}_1 \rightarrow \mathcal{M}_1} by

\displaystyle \langle \mathcal{A} \varphi, \mu \rangle = \langle \varphi, \mathcal{A}^* \mu \rangle, \ \forall \varphi \in \mathcal{D}.

{\mathcal{A}^*} is well defined in the above way, since

  • The linear functional {L} defined on {\mathcal{D}} by

    \displaystyle L \varphi = \langle \mathcal{A} \varphi, \mu\rangle

    can be extended consistently to the larger space {C_0} by the Hahn–Banach theorem, using the sup norm. (Can we find a specific extension?)

  • The Riesz representation theorem on {C_0} then implies there exists a unique finite signed measure {\nu}, such that

    \displaystyle L \varphi = \langle \varphi, \nu \rangle, \ \forall \varphi \in C_0,

    and we set {\mathcal{A}^* \mu = \nu}.
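In the finite-difference picture of the earlier sketch no extension argument is needed: {\langle \varphi, \mu \rangle} is a dot product, and the defining relation forces {\mathcal{A}^*} to be the matrix transpose. A quick check, reusing x, A and phi from the sketch above:

    rng = np.random.default_rng(0)
    mu = rng.random(len(x)); mu /= mu.sum()   # a random probability vector
    lhs = (A @ phi) @ mu                      # <A phi, mu>
    rhs = phi @ (A.T @ mu)                    # <phi, A^T mu>
    print(abs(lhs - rhs))                     # zero up to floating-point rounding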

Given a function {g\in C_b (\mathbb{R})} and a constant {\alpha>0}, we consider the optimal stopping (OS) problem:

\displaystyle V(x) = \sup_{\tau\in \mathcal{T}} \mathbb{E}_x [ e^{-\alpha \tau} g(X_\tau)] \hspace{1in} (OS).

Here, we use {\mathcal{T}} to denote the collection of all {\mathcal{F}_t}-stopping times {\tau} satisfying {\tau<\infty} almost surely, and {\mathbb{E}_x[\ \cdot \ ]} means {\mathbb{E}[\ \cdot \ | X_0 = x]}.
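Note that any fixed rule {\tau \in \mathcal{T}} gives a lower bound on {V(x)}, so (OS) is easy to probe by simulation. A Monte Carlo sketch, again with the illustrative choices {b(x) = -x}, {\sigma \equiv 1}, a bounded payoff {g}, and a capped threshold rule (none of these are from the text):

    import numpy as np

    rng = np.random.default_rng(1)
    alpha, dt, x0, t_max, n_paths = 0.5, 1e-2, 0.0, 20.0, 2000
    g = lambda y: 1.0 / (1.0 + (y - 1.0) ** 2)    # a bounded continuous payoff

    payoffs = np.empty(n_paths)
    for k in range(n_paths):
        xt, t = x0, 0.0
        while xt < 0.8 and t < t_max:             # rule: stop once X_t >= 0.8, capped
            xt += -xt * dt + np.sqrt(dt) * rng.standard_normal()   # Euler-Maruyama
            t += dt
        payoffs[k] = np.exp(-alpha * t) * g(xt)

    est = payoffs.mean()
    err = 1.96 * payoffs.std() / np.sqrt(n_paths)
    print(f"lower bound on V({x0}): {est:.4f} +/- {err:.4f}")

Capping the rule at {t_{\max} = 20} keeps it in {\mathcal{T}}; the discounted mass beyond the cap is at most {e^{-\alpha t_{\max}} = e^{-10}}, which is negligible here.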

One can apply Itô's formula to {e^{-\alpha t}\varphi(X_t)} to obtain

\displaystyle e^{-\alpha \tau} \varphi(X_\tau) = \varphi(x) + \int_0^\tau e^{-\alpha s} (\mathcal{A} - \alpha) \varphi (X_s) ds + M_\tau, \ \forall \varphi \in \mathcal{D}

where

\displaystyle M_t = \int_0^t e^{-\alpha s} \varphi'(X_s) \sigma (X_s) dW_s.

We impose the following assumption:

  • (A1) {\sigma \in C_b(\mathbb{R})}.

Under assumption (A1), the integrand {e^{-\alpha s} \varphi'(X_s) \sigma(X_s)} is bounded (recall {\varphi \in C_0^2}), so {M_t} is a martingale, and {\mathbb{E}[M_{\tau\wedge t}] = 0} for all {\tau\in \mathcal{T}} by the optional sampling theorem. This yields

\displaystyle \mathbb{E}_x [e^{-\alpha (\tau\wedge t)} \varphi(X_{\tau\wedge t})] = \varphi(x) + \mathbb{E}_x \Big[\int_0^{\tau\wedge t} e^{-\alpha s} (\mathcal{A} - \alpha) \varphi (X_s) ds\Big], \ \forall \varphi \in \mathcal{D}, \tau \in \mathcal{T}.

Then, letting {t\rightarrow \infty} and using the dominated convergence theorem (both {\varphi} and {(\mathcal{A}-\alpha)\varphi} are bounded, and {\tau < \infty} almost surely), one obtains

\displaystyle \mathbb{E}_x [e^{-\alpha \tau} \varphi(X_{\tau})] = \varphi(x) + \mathbb{E}_x \Big[\int_0^{\tau} e^{-\alpha s} (\mathcal{A} - \alpha) \varphi (X_s) ds\Big], \ \forall \varphi \in \mathcal{D}, \tau \in \mathcal{T} \ \ \ \ \ (1)
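Identity (1) can be sanity-checked by simulation. A sketch with the same illustrative dynamics, {\varphi(x) = e^{-x^2}} (for which {\mathcal{A}\varphi(x) = (4x^2 - 1)e^{-x^2}} by direct computation), and {\tau} a hitting time capped at a finite horizon:

    import numpy as np

    rng = np.random.default_rng(2)
    alpha, dt, x0, t_max, n_paths = 0.5, 1e-2, 0.0, 10.0, 2000
    phi  = lambda y: np.exp(-y ** 2)
    Aphi = lambda y: (4 * y ** 2 - 1) * np.exp(-y ** 2)   # A phi = -x phi' + phi''/2

    lhs = np.empty(n_paths); integrals = np.empty(n_paths)
    for k in range(n_paths):
        xt, t, acc = x0, 0.0, 0.0
        # tau = min(first hitting time of 0.8, t_max), a finite stopping time
        while xt < 0.8 and t < t_max:
            acc += np.exp(-alpha * t) * (Aphi(xt) - alpha * phi(xt)) * dt
            xt += -xt * dt + np.sqrt(dt) * rng.standard_normal()
            t += dt
        lhs[k], integrals[k] = np.exp(-alpha * t) * phi(xt), acc

    print(f"E_x[e^(-alpha tau) phi(X_tau)] = {lhs.mean():.4f}")
    print(f"phi(x) + E_x[integral in (1)]  = {phi(x0) + integrals.mean():.4f}")

The two printed values should agree up to Monte Carlo and time-discretization error.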

Next we introduce two measures {\nu_\tau, \mu_\tau: \mathcal{B}(\mathbb{R}) \rightarrow [0, \infty)} by

\displaystyle \nu_\tau (\Gamma) = \mathbb{E}_x[e^{-\alpha \tau} I_\Gamma(X_\tau)], \quad \mu_\tau(\Gamma) = \mathbb{E}_x \Big[\int_0^\tau e^{-\alpha s} I_\Gamma (X_s) ds \Big], \quad \forall \Gamma \in \mathcal{B}(\mathbb{R}).

Note that {\nu_\tau(\mathbb{R}) = \mathbb{E}_x[e^{-\alpha\tau}] \le 1} and {\mu_\tau(\mathbb{R}) = (1 - \mathbb{E}_x[e^{-\alpha\tau}])/\alpha \le 1/\alpha}, so {\nu_\tau \in \mathcal{M}_1}, and {\mu_\tau \in \mathcal{M}_1} as well once the mass bound in {\mathcal{M}_1} is read as {1 \vee 1/\alpha}. One can rewrite (1) as

\displaystyle \langle \varphi, \nu_\tau\rangle + \langle (\alpha - \mathcal{A}) \varphi, \mu_\tau \rangle = \langle \varphi, \delta_x\rangle, \ \forall \varphi \in \mathcal{D}, \tau \in \mathcal{T}.

This motivates the following linear programming (LP) problem:

\displaystyle V_1(x) = \sup_{(\nu, \mu)\in \mathcal{M}_1^2} \{\langle g, \nu \rangle | \langle \varphi, \nu\rangle + \langle (\alpha - \mathcal{A}) \varphi, \mu \rangle = \langle \varphi, \delta_x\rangle, \ \forall \varphi \in \mathcal{D}\} \hspace{1in} (LP).

One can write the above (LP) in a standard form. To see this, we first rewrite the constraint as

\displaystyle \langle \varphi, \nu\rangle + \langle \varphi, (\alpha - \mathcal{A}^*)\mu \rangle = \langle \varphi, \delta_x\rangle, \ \forall \varphi \in \mathcal{D},

or, for short,

\displaystyle \nu + (\alpha - \mathcal{A}^*)\mu = \delta_x.

Hence,

\displaystyle V_1(x) = \sup_{\mathcal{M}_1^2 \ni X \ge 0} \{ \langle C, X\rangle | A X = b \}

where {C = (g, 0)^T}, {A = (I, \alpha- \mathcal{A}^*)}, which maps {X = (\nu, \mu)^T \in \mathcal{M}_1^2} to the measure {\nu + (\alpha - \mathcal{A}^*)\mu}, and {b = \delta_x}. Every {\tau \in \mathcal{T}} induces a feasible pair {(\nu_\tau, \mu_\tau)} with {\langle g, \nu_\tau \rangle = \mathbb{E}_x[e^{-\alpha\tau} g(X_\tau)]}, so one expects

\displaystyle V_1 \ge V.
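A finite-dimensional stand-in for (LP) can be solved directly. On the grid of the earlier sketches the constraint reads {\nu + (\alpha I - A^T)\mu = \delta_x} with {\nu, \mu \ge 0} componentwise; the sketch below reuses generator_matrix, ignores boundary effects, and keeps the same illustrative dynamics and payoff:

    import numpy as np
    from scipy.optimize import linprog

    alpha = 0.5
    x = np.linspace(-5.0, 5.0, 201); n = len(x)
    A = generator_matrix(x, b=lambda y: -y, sigma=lambda y: 1.0)
    g = 1.0 / (1.0 + (x - 1.0) ** 2)
    i0 = int(np.argmin(np.abs(x - 0.0)))      # grid index of the start point x = 0

    delta = np.zeros(n); delta[i0] = 1.0      # delta_x as a vector of point masses
    A_eq = np.hstack([np.eye(n), alpha * np.eye(n) - A.T])   # (nu, mu) -> nu + (alpha - A*) mu
    c = -np.concatenate([g, np.zeros(n)])     # maximize <g, nu>  ==  minimize -<g, nu>

    res = linprog(c, A_eq=A_eq, b_eq=delta, bounds=(0, None), method="highs")
    print(f"V_1(0) approx {-res.fun:.4f}")

Up to discretization error, the printed value should dominate the Monte Carlo lower bound computed after (OS).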

The LP duality formulation suggests the dual problem

\displaystyle V_2(x) = \inf_{\varphi \in \mathcal{D}} \{ \langle \varphi, b \rangle | A^T \varphi \ge C\} = \inf_{\varphi \in \mathcal{D}} \{ \langle \varphi, b \rangle | (\varphi, (\alpha - \mathcal{A}) \varphi)^T \ge (g, 0)^T \}.
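The discretized dual is likewise a small LP: minimize {\varphi(x)} over grid functions {\varphi} with {\varphi \ge g} and {(\alpha - \mathcal{A})\varphi \ge 0}, i.e. over {\alpha}-excessive majorants of {g}. A sketch under the same illustrative assumptions as the primal, again reusing generator_matrix:

    import numpy as np
    from scipy.optimize import linprog

    alpha = 0.5
    x = np.linspace(-5.0, 5.0, 201); n = len(x)
    A = generator_matrix(x, b=lambda y: -y, sigma=lambda y: 1.0)
    g = 1.0 / (1.0 + (x - 1.0) ** 2)
    i0 = int(np.argmin(np.abs(x - 0.0)))

    c = np.zeros(n); c[i0] = 1.0                            # objective <phi, delta_x>
    A_ub = np.vstack([-np.eye(n), A - alpha * np.eye(n)])   # phi >= g, (alpha - A) phi >= 0
    b_ub = np.concatenate([-g, np.zeros(n)])

    res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=(None, None), method="highs")
    print(f"V_2(0) approx {res.fun:.4f}")    # should match V_1(0) up to solver tolerance

Since finite-dimensional LPs enjoy strong duality, the two printed values should agree, which makes the discretization a useful testbed for the questions below.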

Weak duality holds as usual: for any feasible {X \ge 0} in (LP) and any dual feasible {\varphi}, we have {\langle C, X \rangle \le \langle A^T \varphi, X \rangle = \langle \varphi, A X \rangle = \langle \varphi, b \rangle}, hence

\displaystyle V_1 \le V_2.

How about strong duality, i.e. does {V_1 = V_2} hold?
