
Chapter 4 - Stochastic differential equations and Feynman-Kac formulas - Exercises


Exercise 4.1 (linear transformation of Brownian motion)
  1. Let W be a standard d-dimensional Brownian motion and let U be an orthogonal matrix (i.e. U^*=U^{-1}). Prove that UW defines a new standard d-dimensional Brownian motion.
  2. Application: let W_1 and W_2 be two independent Brownian motions. For any \rho\in [-1,1], justify that \rho W_1+\sqrt{1-\rho^2}W_2 and -\sqrt{1-\rho^2}W_1+\rho W_2 are two independent Brownian motions.
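
A short numerical sanity check of question 2 (an added illustration, not part of the exercise; it assumes NumPy): the two rotated processes below should each have variance t at time t and zero correlation.

import numpy as np

rng = np.random.default_rng(0)
n, n_paths, rho = 1000, 5000, 0.6
dt = 1.0 / n

# Increments of two independent standard Brownian motions on [0,1]
dW1 = rng.normal(0.0, np.sqrt(dt), size=(n_paths, n))
dW2 = rng.normal(0.0, np.sqrt(dt), size=(n_paths, n))

# Rows of the orthogonal matrix U = [[rho, s], [-s, rho]], s = sqrt(1 - rho^2)
s = np.sqrt(1.0 - rho**2)
dB1 = rho * dW1 + s * dW2
dB2 = -s * dW1 + rho * dW2

B1, B2 = dB1.sum(axis=1), dB2.sum(axis=1)       # values at time 1
print("Var:", B1.var(), B2.var())               # both close to 1
print("Corr:", np.corrcoef(B1, B2)[0, 1])       # close to 0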

Exercise 4.2 (approximation of the integral of a stochastic process)
For a standard Brownian motion, we study the convergence rate of the approximation
\Delta I_n:=\int_0^1 W_s {\rm d}s-\frac{ 1 }{n }\sum_{i=0}^{n-1} W_{\frac in}
as n\to+\infty.
  1. We start with a rough estimate. Prove that {\mathbb E}(|\Delta I_n|)\leq \sum_{i=0}^{n-1} {\mathbb E}\left(\int_{\frac in}^{\frac{i+1}n} |W_s-W_{\frac in}|{\rm d}s\right)=O(n^{-1/2}).
  2. Using Lemma A.1.4, prove that \Delta I_n is Gaussian distributed. Compute its parameters and conclude that {\mathbb E}(|\Delta I_n|)=O(n^{-1}).
  3. A more generic proof of the above estimate consists in writing \Delta I_n=\sum_{i=0}^{n-1} \int_{\frac in}^{\frac{i+1}n} (\frac{ i+1 }{n }-s){\rm d}W_s
    where we have applied the Itô formula to s\mapsto ( \frac{ i+1 }{n }-s)(W_s-W_{\frac in}) on each interval [\frac in,\frac{i+1}n]. Using the Itô isometry, derive {\mathbb E}(|\Delta I_n|^2)= O(n^{-2}) and therefore the announced estimate.
  4. Proceeding as in (3), extend the previous estimate to \int_0^1 X_s{\rm d}s-\frac{ 1 }{n }\sum_{i=0}^{n-1} X_{\frac in}
    where X is a scalar Itô process with bounded coefficients (Definition 4.2.4).
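
As a numerical complement to questions 2-3 (an added NumPy sketch; the ds-integral is emulated by a trapezoid rule on a finer grid, which is an approximation), the products n\,{\mathbb E}(|\Delta I_n|) below should stabilize near \sqrt{2/(3\pi)}\approx 0.461, the value that follows from the Gaussian law identified in question 2.

import numpy as np

rng = np.random.default_rng(1)
n_paths, k = 5000, 50                       # k fine steps per coarse step

for n in (10, 20, 40):
    m = n * k                               # fine grid emulating the ds-integral
    dW = rng.normal(0.0, np.sqrt(1.0 / m), size=(n_paths, m))
    W = np.concatenate([np.zeros((n_paths, 1)), dW.cumsum(axis=1)], axis=1)
    integral = ((W[:, :-1] + W[:, 1:]) / 2.0).sum(axis=1) / m  # trapezoid ~ int_0^1 W_s ds
    riemann = W[:, :-1:k].sum(axis=1) / n                      # (1/n) sum_i W_{i/n}
    print(n, n * np.abs(integral - riemann).mean())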

Exercise 4.3 (approximation of a stochastic integral)
We consider the convergence rate of the approximation \Delta J_n:=\int_0^1 Z_s {\rm d}W_s-\sum_{i=0}^{n-1} Z_{\frac in} (W_{\frac {i+1}n}-W_{\frac in})
where Z_s:=f(s,W_s) for some function f such that {\mathbb E}\int_0^1 |Z_s|^2 {\rm d}s+\sup_{i\leq n-1}\mathbb{E}|Z_{\frac in}|^2<+\infty. We illustrate that the convergence order is, under mild conditions, equal to 1/2, but that it can be smaller for irregular f.
  1. Show that {\mathbb E}|\Delta J_n|^2= {\mathbb E} \left( \sum_{i=0}^{n-1} \int_{\frac in}^{\frac{i+1}n} |Z_s- Z_{\frac in}|^2 {\rm d}s \right).
  2. When Z_s=W_s, show that {\mathbb E}|\Delta J_n|^2 \sim {\rm Cst}\ n^{-1} for some positive constant {\rm Cst} (see the numerical check after this exercise).
  3. Assuming that f is bounded, smooth with bounded derivatives, prove that {\mathbb E}|\Delta J_n|^2=O(n^{-1}).
  4. Assume that Z is  a square-integrable martingale. Show that {\mathbb E}|Z_s- Z_{\frac in}|^2\leq {\mathbb E}|Z_{\frac {i+1}n}|^2-{\mathbb E}|Z_{\frac in}|^2, and thus {\mathbb E}|\Delta J_n|^2\leq ({\mathbb E}|Z_{1}|^2-{\mathbb E}|Z_{0}|^2)n^{-1}.
  5. Set Z_s:={\cal N}'(W_s/\sqrt{1-s})/\sqrt{1-s}, where {\cal N} denotes the cumulative distribution function of the standard Gaussian law. Establish that n^{1/2}{\mathbb E}|\Delta J_n|^2 is bounded away from 0 for n large enough.
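
For question 2, the closed form \int_0^1 W_s{\rm d}W_s=(W_1^2-1)/2 (from the Itô formula) makes a direct numerical check easy (an added NumPy sketch): the products n\,{\mathbb E}|\Delta J_n|^2 below should stabilize near a positive constant, consistent with the claimed rate.

import numpy as np

rng = np.random.default_rng(2)
n_paths = 10000

for n in (10, 100, 1000):
    dW = rng.normal(0.0, np.sqrt(1.0 / n), size=(n_paths, n))
    W = np.concatenate([np.zeros((n_paths, 1)), dW.cumsum(axis=1)], axis=1)
    ito_exact = (W[:, -1] ** 2 - 1.0) / 2.0      # int_0^1 W_s dW_s by Ito's formula
    ito_sum = (W[:, :-1] * dW).sum(axis=1)       # sum_i W_{i/n}(W_{(i+1)/n} - W_{i/n})
    print(n, n * ((ito_exact - ito_sum) ** 2).mean())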

Exercise 4.4 (exact simulation of Ornstein-Uhlenbeck process)
Two processes (X_t)_{t \ge 0} and (Y_t)_{t \ge 0} have the same distribution if for any n \in \mathbb{N} and any 0 \leq t_1 < \cdots < t_n, the vectors (X_{t_1},\dots,X_{t_n}) and (Y_{t_1},\dots,Y_{t_n}) have the same distribution. Let us consider the Ornstein-Uhlenbeck process (X_t)_{t\geq 0}, solution of X_t= x_0-a\int_0^t X_s{\rm d}s + \sigma W_t,
where x_0\in \mathbb{R}, \sigma \geq 0, and (W_t)_{t \ge 0} is a standard Brownian motion.
  1. By applying the Itô formula to e^{at}X_t, give an explicit representation for X_t in terms of stochastic integrals.
  2. Deduce the explicit distribution of  (X_{t_1},\cdots,X_{t_n}).
  3. Find two functions \alpha(t) and \beta(t) such that (X_t)_{t\geq 0} has the same distribution as (Y_t)_{t\geq 0} with Y_t=\alpha(t)(x_0+W_{\beta(t)}). Design a scheme for the exact simulation of the Ornstein-Uhlenbeck process.
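
The following NumPy sketch (an added illustration) implements the exact simulation scheme that question 3 leads to, written with the Gaussian transition law of the process: given X_{t_k}, the value one step h later is Gaussian with mean e^{-ah}X_{t_k} and variance \sigma^2(1-e^{-2ah})/(2a), which follows from the representation of question 1.

import numpy as np

def ou_exact_paths(x0, a, sigma, T, n, n_paths, rng):
    """Simulate OU paths on the grid {kT/n} with no discretization bias."""
    h = T / n
    mean_factor = np.exp(-a * h)
    step_std = sigma * np.sqrt((1.0 - np.exp(-2.0 * a * h)) / (2.0 * a))
    X = np.empty((n_paths, n + 1))
    X[:, 0] = x0
    for k in range(n):
        X[:, k + 1] = mean_factor * X[:, k] + step_std * rng.normal(size=n_paths)
    return X

rng = np.random.default_rng(3)
X = ou_exact_paths(x0=1.0, a=2.0, sigma=0.5, T=1.0, n=50, n_paths=100000, rng=rng)
# Sanity check against the closed-form variance sigma^2 (1 - e^{-2aT}) / (2a)
print(X[:, -1].var(), 0.5**2 * (1.0 - np.exp(-4.0)) / 4.0)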

Exercise 4.5 (Transformations of SDE and PDE)
For any t \in [0,T) and  x \in \mathbb{R}, we denote by (X^{t,x}_s, s \in [t,T]) the solution to X_s = x+\int_t^s b(X_r){\rm d}r + \int_t^s \sigma(X_r){\rm d}W_r, \qquad t \le s \le T
where the coefficients b,\sigma:\mathbb{R} \to \mathbb{R} are smooth with bounded derivatives, and  \sigma(x)\ge c>0.
For a given Borel set A\subset \mathbb{R}, we define u(t,x):={\mathbb P}(X^{t,x}_T \in A). We assume in the following that u(t,x)>0 for any (t,x)\in [0,T)\times \mathbb{R}, and that appropriate smoothness assumptions are satisfied (namely, u \in {\cal C}^{1,2}([0,T)\times \mathbb{R})).
  1. Let x_0 \in \mathbb{R} and f be a bounded continuous function. Using the PDE satisfied by u on [0,T) \times \mathbb{R}, show that {\mathbb E}[f(X_t)|X_T \in A] = \frac{{\mathbb E}[f(X_t) u(t,X_t)]}{u(0,x_0)}, \qquad \forall t < T,
    where X_t=X^{0,x_0}_t to simplify.
  2. We assume that for any s \le t \le T the equation \begin{align*}\overline{X}_r &= x+\int_s^r\Bigl(b(\overline{X}_w) + \sigma^2(\overline{X}_w) \frac{\partial_x u}{u}(w,\overline{X}_w) \Bigr){\rm d} w \\&+ \int_s^r \sigma(\overline{X}_w){\rm d}W_w, \qquad s \le r \le t \end{align*}
    has a unique solution, denoted by (\overline{X}^{s,x}_r, s \le r \le t). We set v_t(s,x):={\mathbb E}[ f(\overline{X}^{s,x}_t)].
    1. What is the PDE solved by (s,x) \mapsto v_t(s,x) on [0,t) \times\mathbb{R}?
    2. Applying the Itô formula to u(s,X_s) and v_t(s,X_s), 0 \le s \le t, and then to u(s,X_s)v_t(s,X_s), show{\mathbb E}[f(X_t)u(t,X_t)] = v_t(0,x_0)u(0,x_0), \qquad \forall t < T.
    3. Conclude that for any t < T, the distribution of  X_t given \{X_T \in A\} is the  distribution of  \overline{X}^{0,x_0}_t.
  3. In the case b=0, \sigma(x)=1 and A=(y-R,y+R), show that \frac{\partial_x u(t,x)}{u(t,x)} \rightarrow -\frac{x-y}{T-t} for any (t,x) as R \to 0. Interpret the solution to the following equation in terms of a Brownian bridge: \overline{X}_t =x_0 -\int_0^t \frac{\overline{X}_s-y}{T-s}{\rm d}s+ W_t.
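
To visualize question 3 (an added Euler sketch in NumPy; note the drift is singular at T, so the scheme stops one step before T): the solution behaves as a Brownian bridge from x_0 at time 0 to y at time T.

import numpy as np

rng = np.random.default_rng(4)
x0, y, T, n, n_paths = 0.0, 1.0, 1.0, 1000, 20000
h = T / n

X = np.full(n_paths, x0)
for k in range(n - 1):          # stop one step before T: the drift blows up there
    s = k * h
    X += -(X - y) / (T - s) * h + np.sqrt(h) * rng.normal(size=n_paths)
# At time T - h the paths are pinned near y: mean ~ y, std ~ sqrt(h)
print("mean, std at time T - h:", X.mean(), X.std())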

Exercise 4.6 (Exit time from a domain)
Consider the solution (X^x_t)_t of a stochastic differential equation in \mathbb{R}^d, starting from x at time 0, with time-independent coefficients (b,\sigma) satisfying the usual Lipschitz conditions of Theorem 4.3.1. Let D be a non-empty open connected set of \mathbb{R}^d and set \tau_D^x = \inf\{t \ge 0 : X^x_t \notin D \}
for the first exit time from D.
  1. Assume first that \sup_{x \in D}{\mathbb E}[\tau^x_D] \le c, for some constant c>0.
    1. Show that {\mathbb E}[(\tau^x_D)^k] \le k! c^k for any k \in \mathbb{N}.
      Hint: use the identity \frac1k T^k =\int_0^T (T-t)^{k-1} {\mathrm d}t and the  Markov property of X^x.
    2. Deduce that \sup_{x \in D} {\mathbb E}[e^{\lambda \tau^x_D}] < \infty for any \lambda < c^{-1}. What are the consequences of this result for the simulation of the path of X up to \tau^x_D?
    3. Set \gamma(t) = \sup_{x \in D} {\mathbb P}(\tau^x_D > t). Show  \gamma(t+s) \le \gamma(t)\gamma(s) for any t,s \ge 0.
    4. The previous question shows that the function t \mapsto \ln\gamma(t) \in [-\infty,0] is sub-additive: by the Fekete lemma, the limit \lim_{t \to \infty} \frac1t \ln\gamma(t) = \inf_{t >0} \frac1t \ln\gamma(t) =: -\alpha_{D}
      exists in [-\infty,0]. Show that  \alpha_{D} \ge c^{-1}.
  2. Assume now that  D is bounded. The infinitesimal generator of X is denoted by \cal L.
    1. Suppose there exists f \in {\cal C}^2(\mathbb{R}^d,\mathbb{R}) such that f(x) \ge 0 and {\cal L} f(x) \le -1 for x \in D. Show that \sup_{x \in D} {\mathbb E}[\tau^x_D] \le c := \sup_{y \in D} f(y).
    2. In the case \inf_{x\in D}[\sigma \sigma^*]_{i,i}(x)>0 for some i, exhibit such a function f.
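
An illustration of questions 2.1-2.2 (an added NumPy sketch): for standard Brownian motion (b=0, \sigma=I_d) and D the ball of radius R, the function f(x)=(R^2-|x|^2)/d is nonnegative on D and satisfies {\cal L}f=\frac12\Delta f=-1, and one can show {\mathbb E}[\tau^x_D]=(R^2-|x|^2)/d. The crude Euler estimate below is consistent with this value, up to the O(\sqrt{h}) discretization bias of exit times.

import numpy as np

rng = np.random.default_rng(5)
d, R, h, n_paths = 2, 1.0, 1e-4, 2000

X = np.zeros((n_paths, d))              # start at the center of the ball
tau = np.zeros(n_paths)
alive = np.ones(n_paths, dtype=bool)
while alive.any():
    X[alive] += np.sqrt(h) * rng.normal(size=(alive.sum(), d))
    tau[alive] += h
    alive &= (X**2).sum(axis=1) < R**2  # paths that have left D stay stopped
print("MC estimate:", tau.mean(), "theory R^2/d:", R**2 / d)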

Exercise 4.7 (Bismut-Elworthy-Li formula)
Let X^x be the Ornstein-Uhlenbeck process solution of X_t= x-a\int_0^t X_s{\mathrm d}s + \sigma W_t,
with x\in \mathbb{R} and \sigma>0. We aim at computing \partial_x {\mathbb E}(f(X^{x}_T)) for a bounded smooth function f.
  1. Using the sensitivity formula of Theorem 4.5.3, show that \partial_x{\mathbb E}(f(X^{x}_T))={\mathbb E}\Big(f(X^{x}_T)\int_0^{T} \frac{ e^{-at} } {\sigma T}{\mathrm d}W_t\Big) (a numerical check is sketched after this exercise).
  2. Show that the sensitivity formula is still valid by replacing \int_0^{T} \frac{ e^{-at} } {\sigma T}{\mathrm d}W_t by its conditional expectation given X^x_T. Compute this conditional expectation explicitly.
  3. Prove that the new formula coincides with that given by the likelihood ratio method using the Gaussian distribution of X^x_T (Proposition 2.2.9).
  4. Which of the representations (1) and (3) has the smaller variance?
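
A Monte Carlo check of the representation in question 1 (an added NumPy sketch; f=\sin is a convenient bounded smooth choice since \partial_x{\mathbb E}(\sin(X^x_T)) has a closed form, and the stochastic integrals are approximated by Riemann-Itô sums on a fine grid).

import numpy as np

rng = np.random.default_rng(6)
x, a, sigma, T = 1.0, 2.0, 0.5, 1.0
m, n_paths = 200, 50000
h = T / m
t = np.arange(m) * h                                # left endpoints of the fine grid

dW = rng.normal(0.0, np.sqrt(h), size=(n_paths, m))
# X_T = x e^{-aT} + sigma int_0^T e^{-a(T-s)} dW_s  (Riemann-Ito sum)
X_T = x * np.exp(-a * T) + sigma * (np.exp(-a * (T - t)) * dW).sum(axis=1)
weight = (np.exp(-a * t) * dW).sum(axis=1) / (sigma * T)
print("BEL estimate:", (np.sin(X_T) * weight).mean())

# Closed form: X_T ~ N(x e^{-aT}, v) with v = sigma^2 (1 - e^{-2aT}) / (2a),
# hence d/dx E[sin(X_T)] = e^{-aT} cos(x e^{-aT}) e^{-v/2}
v = sigma**2 * (1.0 - np.exp(-2.0 * a * T)) / (2.0 * a)
print("exact:", np.exp(-a * T) * np.cos(x * np.exp(-a * T)) * np.exp(-v / 2.0))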

This work is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License.
