Ito’s formula

We will review Levy-type stochastic integrals and Ito’s formula.

Let {X = M + B} be a semimartingale, where {M} is a local martingale and {B} is an adapted process of bounded variation. The problem of stochastic integration is to make sense of objects of the form

\displaystyle  \int_0^t F(s) d X(s) = \int_0^t F(s) d M(s) + \int_0^t F(s) d B(s).

If {X} is a Levy process, then there exist {b\in \mathbb{R}^d}, a Brownian motion {W_A} with covariance matrix {A} in {\mathbb{R}^d}, and an independent Poisson random measure {N} on {\mathbb{R}^+\times (\mathbb{R}^d \setminus \{0\})} with Levy measure {\nu}, such that for each {t\ge 0} the Levy-Ito decomposition takes the form

\displaystyle  X(t) = bt + W_A(t) + \int_{|x|<1} x \tilde N (t, dx) + \int_{|x|\ge 1} x N(t,dx). \ \ \ \ \ (1)
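
As an illustration of (1), here is a minimal simulation sketch, assuming for simplicity that the Levy measure is finite, so that the jump part is compound Poisson and the compensation of the small jumps can be skipped. The parameters b, sigma, lam and the jump law in the code are arbitrary illustrative choices, not quantities fixed by the text.

```python
# A minimal sketch: one sample path of a 1-d Levy process via the Levy-Ito
# decomposition (1), assuming a *finite* Levy measure so that the jump part
# is compound Poisson (no small-jump compensation needed).
import numpy as np

rng = np.random.default_rng(0)

def simulate_levy_path(T=1.0, n=1000, b=0.1, sigma=0.3, lam=5.0):
    dt = T / n
    t = np.linspace(0.0, T, n + 1)
    dW = rng.normal(0.0, np.sqrt(dt), n)          # Brownian increments of W_A, A = sigma^2
    num_jumps = rng.poisson(lam * T)              # number of Poisson jumps on [0, T]
    jump_times = rng.uniform(0.0, T, num_jumps)
    jump_sizes = rng.normal(0.0, 0.5, num_jumps)  # illustrative jump distribution
    X = np.zeros(n + 1)
    for i in range(1, n + 1):
        J = jump_sizes[(jump_times > t[i - 1]) & (jump_times <= t[i])].sum()
        X[i] = X[i - 1] + b * dt + sigma * dW[i - 1] + J
    return t, X

t, X = simulate_levy_path()
print(X[-1])
```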

Given {E \in \mathcal{B}(\mathbb{R}^d)}, define

\displaystyle \rho((s,t],E) = (t-s) (\delta_0(E) + \nu(E\setminus \{0\})).

Let {\mathcal{P}_2 (T, E)} (resp. {\mathcal{H}_2(T,E)}) be the linear space of all equivalence classes of mappings {F: [0,T] \times E \times \Omega \rightarrow \mathbb{R}} which coincide almost everywhere with respect to {\rho \times P} and satisfy the following conditions:

  1. {F} is predictable;
  2. {P(\int_0^T \int_E |F(t,x)|^2 \rho(dt, dx) <\infty) = 1.} (resp. {\int_0^T \int_E \mathbb{E}[ |F(t,x)|^2] \rho(dt, dx) <\infty.})

{\mathcal{P}_2(T)} (resp. {\mathcal{H}_2(T)}) is the space of maps {F: [0,T] \times \Omega \rightarrow \mathbb{R}} which are predictable and satisfy {P(\int_0^T |F(t)|^2 dt < \infty) = 1} (resp. {\int_0^T \mathbb{E}[ |F(t)|^2] dt < \infty}). Note that {\mathcal{H}_2(T, E)} and {\mathcal{H}_2(T)} are Hilbert spaces, and {\mathcal{H}_2(T, E) \subset \mathcal{P}_2(T, E)} and {\mathcal{H}_2(T) \subset \mathcal{P}_2(T).}

In what follows, we take {E = B_1(0) \setminus \{0\}}. We say that an {\mathbb{R}^d}-valued stochastic process {Y = (Y(t), t\ge 0)} is a Levy-type stochastic integral if, for each {1\le i \le d} and {t\ge 0}, it can be written in the form

\displaystyle   \begin{array}{l} Y^i(t) =\displaystyle Y^i(0) + \int_0^t G^i(s) ds + \sum_{j=1}^m \int_0^t F_j^i(s) d W^j(s) \\ \hspace{0.5in} \displaystyle+ \int_0^t \int_{|x|<1} H^i(s,x) \tilde N(ds, dx) + \int_0^t \int_{|x|\ge 1} K^i(s,x) N(ds, dx), \end{array} \ \ \ \ \ (2)

where for each {1\le i \le d, 1\le j \le m}, {t\ge 0}, {|G^i|^{1/2}, F_j^i\in \mathcal{P}_2(T)}, {H^i \in \mathcal{P}_2(T, E)}, and {K} is predictable. Here {W} is an {m}-dimensional standard Brownian motion and {N} is an independent Poisson random measure on {\mathbb{R}^+ \times (\mathbb{R}^d\setminus \{0\})} with compensated measure {\tilde N} and intensity measure {\nu}, which we will assume is a Levy measure.
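
To make the structure of (2) concrete, the following rough Euler-type sketch (an illustration only, not from the original text) takes {d = m = 1}, {H = 0}, deterministic integrands G(s), F(s), K(s, x), and a compound Poisson jump measure whose marks satisfy {|x| \ge 1}, so that only the K-term carries jumps. All coefficient choices are illustrative assumptions.

```python
# A rough Euler discretization of a 1-d Levy-type stochastic integral of the
# form (2) with d = m = 1, H = 0 and compound Poisson large jumps.
import numpy as np

rng = np.random.default_rng(1)

def levy_type_integral(T=1.0, n=4000, lam=2.0):
    G = lambda s: np.sin(2 * np.pi * s)        # drift integrand G(s)
    F = lambda s: 0.3 * (1.0 + s)              # Brownian integrand F(s)
    K = lambda s, x: s * x                     # large-jump integrand K(s, x)
    dt = T / n
    num_jumps = rng.poisson(lam * T)
    jump_times = rng.uniform(0.0, T, num_jumps)
    # jump marks with |x| >= 1, so they fall in the domain of the K-integral
    jump_marks = rng.choice([-1.0, 1.0], num_jumps) * (1.0 + rng.exponential(1.0, num_jumps))
    Y = 0.0
    for i in range(n):
        s = i * dt
        Y += G(s) * dt + F(s) * rng.normal(0.0, np.sqrt(dt))
        for x in jump_marks[(jump_times > s) & (jump_times <= s + dt)]:
            Y += K(s, x)                       # each atom of N(ds, dx) contributes K(s, x)
    return Y

print(levy_type_integral())
```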

Equivalent notation for the Levy-type stochastic integral (2) is either

\displaystyle  d Y(t) = G(t) dt + F(t) dW(t) + H(t,x) \tilde N(dt, dx) + K(t,x) N(dt, dx). \ \ \ \ \ (3)

or (to emphasize the domains of integration)

\displaystyle  d Y(t) = G(t) dt + F(t) d W(t) + \int_{|x|<1}H(t,x) \tilde N (dt, dx) + \int_{|x|\ge 1}K(t,x) N(dt, dx).

Clearly, {Y} is a semimartingale, but it need not be a Levy process.

Consider a Levy-type stochastic integral of the form (3). Let {Y_c} be the continuous part of {Y} defined by

\displaystyle  Y_c^i(t) = \int_0^t G^i(s) ds + \sum_j \int_0^t F_j^i(s) d W^j(s).

Theorem 1 (Ito’s theorem) For each {f\in C^2(\mathbb{R}^d)}, {t\ge 0}, with probability 1,

\displaystyle  \begin{array}{ll} f(Y(t)) - f(Y(0)) = & \displaystyle \sum_i \int_0^t \partial_i f(Y(s-)) d Y_c^i(s) + \frac 1 2 \sum_{i,j} \int_0^t \partial_i \partial_j f(Y(s-)) d [Y_c^i, Y_c^j] (s) \\ & \displaystyle \quad + \int_0^t \int_{|x|\ge 1} [f(Y(s-) + K(s,x)) - f(Y(s-))] N(ds, dx) \\ & \quad \displaystyle + \int_0^t \int_{|x|<1} [f(Y(s-)+ H(s,x)) - f(Y(s-))] \tilde N(ds, dx) \\ & \quad + \displaystyle \int_0^t \int_{|x|<1} [f(Y(s-)+ H(s,x)) - f(Y(s-)) \\ & \quad \hspace{1in} \displaystyle - \sum_i H^i(s,x) \partial_i f(Y(s-))] \nu(dx) ds. \end{array}

Another representation of the above Ito formula is the following: if {Y} is of the form (3), then for each {f\in C^2(\mathbb{R}^d), t\ge 0}, with probability 1, we have

\displaystyle  \begin{array}{ll} f(Y(t)) - f(Y(0)) = & \displaystyle \sum_i \int_0^t \partial_i f(Y(s-)) d Y^i(s) + \frac 1 2 \sum_{i,j} \int_0^t \partial_i \partial_j f(Y(s-)) d[Y_c^i, Y_c^j](s) \\ & \displaystyle + \sum_{0\le s \le t} [f(Y(s)) - f(Y(s-)) - \sum_i \Delta Y^i(s) \partial_i f(Y(s-))]. \end{array}
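
The second representation lends itself to a numerical sanity check. The sketch below is only an illustration under simplifying assumptions: a one-dimensional toy path with constant coefficients G and F, compound Poisson jumps, and {f(y) = y^2}. Both sides of the formula should then agree up to discretization error.

```python
# Numerical check of the second form of Ito's formula for d = 1, f(y) = y^2,
# on a toy path dY = G dt + F dW + (compound Poisson jumps); all values are
# illustrative.
import numpy as np

rng = np.random.default_rng(2)
T, n, G, F, lam = 1.0, 50_000, 0.1, 0.3, 3.0
dt = T / n
f = lambda y: y ** 2
fp = lambda y: 2.0 * y        # f'
fpp = lambda y: 2.0           # f''

num_jumps = rng.poisson(lam * T)
jump_times = rng.uniform(0.0, T, num_jumps)
jump_sizes = rng.normal(0.0, 0.4, num_jumps)

Y = np.empty(n + 1)
Y[0] = 1.0
stoch_int = qv_term = jump_corr = 0.0
for i in range(n):
    dW = rng.normal(0.0, np.sqrt(dt))
    y_cont = Y[i] + G * dt + F * dW                       # value just before any jump in the bin
    J = jump_sizes[(jump_times > i * dt) & (jump_times <= (i + 1) * dt)].sum()
    Y[i + 1] = y_cont + J
    stoch_int += fp(Y[i]) * (Y[i + 1] - Y[i])             # int f'(Y(s-)) dY(s)
    qv_term += 0.5 * fpp(Y[i]) * F ** 2 * dt              # 1/2 int f'' d[Y_c, Y_c]
    if J != 0.0:                                          # jump correction term
        jump_corr += f(Y[i + 1]) - f(y_cont) - J * fp(y_cont)

lhs = f(Y[-1]) - f(Y[0])
rhs = stoch_int + qv_term + jump_corr
print(lhs, rhs)   # should agree up to discretization error
```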

Note that when the process {Y} is of finite variation (so that {[Y_c^i, Y_c^j] \equiv 0}), Ito’s formula reduces to the following analogue of the classical chain rule for differentiable functions {f}:

\displaystyle  \begin{array}{ll} f(Y(t)) - f(Y(0)) = & \int_0^t \sum_i \partial_i f(Y(s-)) d Y^i(s) \\ & + \sum_{0\le s \le t} [f(Y(s)) - f(Y(s-)) - \sum_i \Delta Y^i(s) \partial_i f(Y(s-))]. \end{array}

Here is the famous Levy characterization of Brownian motion.

Theorem 2 (Levy’s characterization) Let {M} be a continuous centered martingale with {M(0) = 0}, adapted to a given filtration {(\mathcal{F}_t, t\ge 0)}. If {[M_i, M_j](t) = A_{ij} t} for each {t\ge 0,} {1\le i, j \le d}, where {A = (A_{ij})} is a positive definite symmetric matrix, then {M} is an {\mathcal{F}_t}-adapted Brownian motion with covariance {A}.

Proof: Fix {u\in \mathbb{R}^d} and define the process {Y_u(t) = e^{i(u, M(t))}}. Then, by Ito’s formula, we obtain

\displaystyle  \begin{array}{ll} d Y_u(t)& = Y_u(t) d (i \sum_j u^j M^j(t)) + \frac 1 2 Y_u(t) d [ i \sum_j u^j M^j(t), i \sum_j u^j M^j(t)] \\ & = i Y_u(t) \sum_j u^j dM^j(t) - \frac 1 2 Y_u(t) \sum_{i,j} u^i u^j A_{ij} dt \end{array}

Integrating and taking expectations (the {dM^j}-integrals are martingales with zero expectation) gives

\displaystyle  \mathbb{E} (Y_u(t)) = 1 - \frac 1 2 (u, Au) \int_0^t \mathbb{E}(Y_u(s) ) ds.

Hence, {\mathbb{E} (Y_u(t)) = \exp\{- \frac 1 2 (u, Au) t\}}. Repeating the argument conditionally on {\mathcal{F}_s} yields {\mathbb{E} (e^{i(u, M(t)-M(s))} | \mathcal{F}_s) = \exp\{-\frac 1 2 (u, Au)(t-s)\}}, so {M} has independent, stationary Gaussian increments with covariance {A(t-s)}, i.e. {M} is a Brownian motion with covariance {A}. \Box
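
A quick Monte Carlo sketch of the identity just derived: for a Brownian motion with covariance {A}, the empirical average of {e^{i(u, M(t))}} should be close to {\exp\{-\frac 1 2 (u, Au) t\}}. The matrix A, the vector u and the sample size below are arbitrary illustrative choices.

```python
# Monte Carlo check that E[exp(i(u, M(t)))] = exp(-(u, Au) t / 2) for a
# Brownian motion M with covariance matrix A.
import numpy as np

rng = np.random.default_rng(3)
A = np.array([[1.0, 0.3], [0.3, 0.5]])
u = np.array([0.7, -1.2])
t, n_samples = 2.0, 200_000

# M(t) ~ N(0, A t): sample via the Cholesky factor of A
L = np.linalg.cholesky(A)
M_t = rng.normal(size=(n_samples, 2)) @ L.T * np.sqrt(t)

empirical = np.exp(1j * M_t @ u).mean()
theoretical = np.exp(-0.5 * u @ A @ u * t)
print(empirical.real, theoretical)
```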

We now extend the quadratic variation to the more general case of Levy-type stochastic integrals {Y = (Y(t), t\ge 0)} of the form (3). For each {t\ge 0}, we define a {d\times d} matrix-valued adapted process {[Y, Y]} by the following prescription for its {(i,j)}th entry:

\displaystyle  [Y^i, Y^j](t) = [Y^i_c, Y^j_c](t) + \sum_{0\le s \le t} \Delta Y^i(s) \Delta Y^j(s).

Each {[Y^i, Y^j](t)} is almost surely finite, and we have

\displaystyle  \begin{array}{ll} [Y^i, Y^j](t) = & \displaystyle \sum_{k=1}^m \int_0^t F_k^i(s) F_k^j(s) ds + \int_0^t \int_{|x|<1} H^i(s,x) H^j(s,x) N(ds, dx) \\ & \hspace{2in} \displaystyle+ \int_0^t \int_{|x|\ge 1} K^i(s,x) K^j(s,x) N(ds, dx). \end{array}

The term {d[Y^1, Y^2](t)}, which is sometimes called an Ito correction, arises from the following formal product relations between differentials:

\displaystyle  d W^i(t) dW^j(t) = \delta^{ij} dt; \quad N(dt, dx) N(dt, dy) = N(dt, dx) \delta(x-y).

Theorem 3 (Ito’s product formula) Let {Y^1} and {Y^2} be real-valued Levy-type stochastic integrals of the form (3). Then for all {t\ge 0}, we have

\displaystyle  Y^1(t) Y^2(t) = Y^1(0) Y^2(0) + \int_0^t Y^1(s-) dY^2(s) + \int_0^t Y^2(s-) d Y^1(s) + [Y^1, Y^2](t).
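
The product formula can also be checked numerically. The sketch below (illustrative assumptions only) builds two toy one-dimensional paths driven by the same Brownian motion and the same compound Poisson jumps, so that the bracket {[Y^1, Y^2]} is non-trivial, and compares both sides of the formula.

```python
# Numerical check of Ito's product formula for two toy 1-d Levy-type integrals
# sharing one Brownian motion and one compound Poisson jump source.
import numpy as np

rng = np.random.default_rng(5)
T, n, lam = 1.0, 50_000, 3.0
dt = T / n
G1, G2, F1, F2 = 0.1, 0.05, 0.3, -0.2   # drift and Brownian coefficients
num_jumps = rng.poisson(lam * T)
jump_times = rng.uniform(0.0, T, num_jumps)
jump_sizes = rng.normal(0.0, 0.5, num_jumps)

Y1 = np.empty(n + 1); Y1[0] = 1.0
Y2 = np.empty(n + 1); Y2[0] = -0.5
int1 = int2 = bracket = 0.0
for i in range(n):
    dW = rng.normal(0.0, np.sqrt(dt))
    J = jump_sizes[(jump_times > i * dt) & (jump_times <= (i + 1) * dt)].sum()
    dY1 = G1 * dt + F1 * dW + 1.0 * J                # jump coefficient K1(s, x) = x
    dY2 = G2 * dt + F2 * dW + 2.0 * J                # jump coefficient K2(s, x) = 2x
    int1 += Y1[i] * dY2                              # int Y1(s-) dY2(s)
    int2 += Y2[i] * dY1                              # int Y2(s-) dY1(s)
    bracket += F1 * F2 * dt + (1.0 * J) * (2.0 * J)  # increment of [Y1, Y2]
    Y1[i + 1] = Y1[i] + dY1
    Y2[i + 1] = Y2[i] + dY2

lhs = Y1[-1] * Y2[-1]
rhs = Y1[0] * Y2[0] + int1 + int2 + bracket
print(lhs, rhs)   # should agree up to discretization error
```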

For completeness, we give another characterization of the quadratic variation which is sometimes quite useful. If {X} and {Y} are real-valued Levy-type stochastic integrals of the form (3), then for each {t\ge 0}, with probability 1, we have

\displaystyle  [X,Y](t) = \lim^P_{n\rightarrow \infty} \sum_{j=0}^{n} (X(t_{j+1}^{(n)}) - X(t_{j}^{(n)})) (Y(t_{j+1}^{(n)}) - Y(t_{j}^{(n)})),

where {\lim^P} denotes a limit in probability and {\{0 = t_0^{(n)} < t_1^{(n)} < \cdots < t_{n+1}^{(n)} = t\}} is a sequence of partitions of {[0,t]} whose mesh tends to zero as {n \rightarrow \infty}.
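
This partition-sum characterization is easy to see numerically. In the sketch below (again under illustrative assumptions: a 1-d path with constant F, compound Poisson jumps, and dyadic partitions of a fine simulation grid), the sums of squared increments over finer partitions approach the closed-form value {F^2 t + \sum_{0\le s \le t} (\Delta X(s))^2} coming from the expression for {[Y^i, Y^j]} above.

```python
# Partition sums of squared increments converging to the quadratic variation
# [X, X](T) = F^2 T + sum of squared jumps, for a toy 1-d path.
import numpy as np

rng = np.random.default_rng(4)
T, n, G, F, lam = 1.0, 2 ** 16, 0.05, 0.4, 4.0
dt = T / n
dW = rng.normal(0.0, np.sqrt(dt), n)
num_jumps = rng.poisson(lam * T)
jump_times = rng.uniform(0.0, T, num_jumps)
jump_sizes = rng.normal(0.0, 0.6, num_jumps)

# build the path on the fine grid, assigning each jump to the bin containing it
jumps_per_bin = np.zeros(n)
np.add.at(jumps_per_bin, np.minimum((jump_times / dt).astype(int), n - 1), jump_sizes)
X = np.concatenate([[0.0], np.cumsum(G * dt + F * dW + jumps_per_bin)])

target = F ** 2 * T + (jump_sizes ** 2).sum()   # closed-form [X, X](T)
for k in (8, 10, 12, 16):                       # partitions with 2**k intervals
    step = n // 2 ** k
    incr = X[::step][1:] - X[::step][:-1]
    print(2 ** k, (incr ** 2).sum(), target)
```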
