
Chapter 1 - Generating random variables - Exercises


Exercise 1.1 (inversion method) 
Prove the following generation schemes, where U denotes a random variable uniformly distributed on [0,1].
  1. Exponential distribution. Let \lambda>0. Then X=-\frac{1}{\lambda}\log(U)
    has an exponential distribution with parameter \lambda (denoted by {\cal E}xp(\lambda)), whose density is \lambda e^{-\lambda x}\mathbb{1}_{x\geq 0}.
  2. Geometric distribution. The random variable X\overset d= 1+\lfloor Y \rfloor
    where Y\overset d= {\cal E}xp(\lambda) has a geometric distribution with parameter p such that \lambda=-\log(1-p) (\mathbb{P}(X=n)=p(1-p)^{n-1} for n\geq 1).
  3. Cauchy distribution. Let \sigma>0. Then X=\sigma \tan\big(\pi(U-\frac 12)\big)
    is a Cauchy random variable with parameter \sigma, whose density is \frac{ \sigma }{\pi(x^2+\sigma^2) }\mathbb{1}_{x\in \mathbb{R}}.
  4. Rayleigh distribution. Let  \sigma>0. Then X=\sigma \sqrt{-2\log U}
    is a Rayleigh random variable with parameter \sigma, whose density is \frac{ x }{\sigma^2 }e^{-\frac{ x^2 }{2\sigma^2 }}\mathbb{1}_{x\geq 0}.
  5. Pareto distribution. Let (a, b)\in]0,+\infty[^2. Then X=\frac{ b }{U^{\frac{ 1 }{a }} }
    is a Pareto random variable with parameters (a,b), whose density is  \frac{a b^a}{x^{a+1}}\mathbb{1}_{x\geq b }.
  6. Weibull distribution. Let (a, b)\in]0,+\infty[^2. Then X=b(-\log U)^{\frac{ 1 }{a }}
    is a Weibull random variable with parameters (a,b), whose density is \frac{a}{ b^a}x^{a-1}e^{-(x/b)^a}\mathbb{1}_{x\geq 0}.
  7. Triangular distribution. X=1 - \sqrt{U} has the triangular distribution on [0,1], whose density is 2(1-x)\mathbb{1}_{x\in[0,1]}.
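
As a numerical companion (not a proof), here is a minimal Python sketch of a few of these inversion schemes; the function names and the use of numpy's default generator are my own choices.

  import numpy as np

  rng = np.random.default_rng()  # U([0,1]) draws come from rng.uniform()

  def exponential(lam, size):
      # X = -log(U)/lambda has the Exp(lambda) distribution
      return -np.log(rng.uniform(size=size)) / lam

  def geometric(p, size):
      # X = 1 + floor(Y), Y ~ Exp(lambda) with lambda = -log(1-p)
      lam = -np.log1p(-p)
      return 1 + np.floor(exponential(lam, size)).astype(int)

  def cauchy(sigma, size):
      # X = sigma * tan(pi*(U - 1/2))
      return sigma * np.tan(np.pi * (rng.uniform(size=size) - 0.5))

  def pareto(a, b, size):
      # X = b / U^(1/a), supported on [b, +infinity)
      return b / rng.uniform(size=size) ** (1.0 / a)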

Exercise 1.2 (Box-Muller transform)
Let X and Y be two independent standard Gaussian random variables. Define (R,\theta) as the polar coordinates of (X,Y):
X=R\cos(\theta), \quad Y=R\sin(\theta)
with R\geq 0 and \theta\in[0,2\pi[.
Prove that R^2 and \theta are independent random variables, that R^2 has the {\cal E}xp(\frac 12) distribution, and that \theta is uniformly distributed on [0,2\pi[.
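
Read in the reverse direction, this gives the Box-Muller generator; a minimal Python sketch (assuming numpy, function name mine):

  import numpy as np

  rng = np.random.default_rng()

  def box_muller(size):
      # R^2 = -2 log(U1) ~ Exp(1/2) and theta = 2*pi*U2 ~ U([0,2*pi[), independent
      r = np.sqrt(-2.0 * np.log(rng.uniform(size=size)))
      theta = 2.0 * np.pi * rng.uniform(size=size)
      # X = R cos(theta), Y = R sin(theta) are then independent N(0,1)
      return r * np.cos(theta), r * np.sin(theta)
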
Solution.

Exercise 1.3 (acceptance-rejection method)
Proposition I.3.2 serves to design the acceptance-rejection method and is stated as follows. Let X and Y be two random variables with values in \mathbb{R}^d, whose densities with respect to a reference measure \mu are f and g respectively. Suppose that there exists a constant c\;(\geq 1) satisfying c\;g(x)\geq f(x)\quad \mu-\mbox{a.e.} Let U be a random variable uniformly distributed on [0,1] and independent of Y: then, the distribution of Y given
\{c\; U \; g(Y)< f(Y) \} is the distribution of X.
Here we study a variant of the above result. Let c>0. Show the following statements.
  1. Let Y be a d-dimensional random variable with density g and let U\overset{d}= {\cal U}([0,1]) independent of Y. Then, (Y, cUg(Y)) is a random vector uniformly distributed on A_{cg} = \{(x,z) \in {\mathbb R}^{d}\times {\mathbb R}: 0 \leq z \leq cg(x) \}.
     
  2. Conversely, if (Y,Z) is uniformly distributed on A_{cg}, then the distribution of Y has a density equal to g.
From the above, deduce another proof of Proposition I.3.2 when the reference measure \mu is the Lebesgue measure.
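
A minimal Python sketch of the resulting acceptance-rejection sampler, assuming one can evaluate f and g and draw from g (the names f, g, sample_g and c are placeholders, not part of the exercise):

  import numpy as np

  rng = np.random.default_rng()

  def acceptance_rejection(f, g, sample_g, c):
      # Returns one draw from the density f, assuming c*g(x) >= f(x) everywhere
      while True:
          y = sample_g()           # proposal Y ~ g
          u = rng.uniform()        # U ~ U([0,1]), independent of Y
          if c * u * g(y) < f(y):  # accept on the event {c U g(Y) < f(Y)}
              return y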

Exercise 1.4 (acceptance-rejection method)
Show that the following algorithm generates a standard Gaussian random variable.
  x: double;
  u: double;
  Repeat
     x \leftarrow simulation according to {\cal E}xp(1);
     u \leftarrow simulation according to {\cal U}([-1,1]), independent of x;
  Until (x-1)^2\leq -2 \times \log(|u|)
  Return x if u>0 and -x otherwise
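
For an empirical check of the claim (not a proof), a direct Python transcription of this pseudocode; the function name is mine:

  import numpy as np

  rng = np.random.default_rng()

  def gaussian_by_rejection():
      # Rejection of N(0,1) from Exp(1) proposals, with the sign taken from u
      while True:
          x = rng.exponential(1.0)        # x ~ Exp(1)
          u = rng.uniform(-1.0, 1.0)      # u ~ U([-1,1]), independent of x
          if (x - 1.0) ** 2 <= -2.0 * np.log(abs(u)):
              return x if u > 0 else -x
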
Solution.

Exercise 1.5 (acceptance-rejection method)
What is the output distribution of the following algorithm?
  u: double; 
  v: double; 
  Repeat
     u \leftarrow simulation according to {\cal U}([-1,1]);
     v \leftarrow simulation according to {\cal U}([-1,1]), independent of u;
  Until (1+v^2)\times |u|\leq 1
  Return v if u>0 and 1/v otherwise
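
To form a conjecture before proving it, one can histogram many outputs of the following direct Python transcription (function name mine):

  import numpy as np

  rng = np.random.default_rng()

  def mystery_sampler():
      # Direct transcription of the pseudocode above
      while True:
          u = rng.uniform(-1.0, 1.0)
          v = rng.uniform(-1.0, 1.0)   # independent of u
          if (1.0 + v * v) * abs(u) <= 1.0:
              return v if u > 0 else 1.0 / v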

Exercise 1.6 (ratio-of-uniforms method, Gamma distribution)
Using the ratio-of-uniforms method, design an algorithm for simulating the Gamma distribution \Gamma(a,\theta) (a\geq 1,\theta>0), whose density is p_{a,\theta}(z)=\frac{\theta^a z^{a-1}}{\Gamma(a)}e^{-\theta z}\mathbb{1}_{z>0}.
Hint: first reduce to the case \theta=1.
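
A possible Python sketch along these lines: sample (u,v) uniformly on a box containing \{(u,v): 0\leq u\leq \sqrt{f(v/u)}\} with f(z)=z^{a-1}e^{-z}, accept when u^2\leq f(v/u), and rescale by \theta at the end. The bounding-box constants below come from maximizing \sqrt{f(z)} and z\sqrt{f(z)} (my computation, valid for a\geq 1):

  import numpy as np

  rng = np.random.default_rng()

  def gamma_rou(a, theta=1.0):
      # Ratio-of-uniforms sampler for Gamma(a, theta), a >= 1 (theta is the rate)
      f = lambda z: z ** (a - 1.0) * np.exp(-z)   # unnormalized Gamma(a,1) density
      # sup sqrt(f) is reached at z = a-1, sup z*sqrt(f(z)) at z = a+1
      u_max = ((a - 1.0) / np.e) ** ((a - 1.0) / 2.0) if a > 1.0 else 1.0
      v_max = ((a + 1.0) / np.e) ** ((a + 1.0) / 2.0)
      while True:
          u = rng.uniform(0.0, u_max)
          v = rng.uniform(0.0, v_max)
          if u > 0.0 and u * u <= f(v / u):
              return (v / u) / theta    # reduce from theta = 1 to the general case
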
Solution.

Exercise 1.7 (ratio-of-uniforms method)
Generalize Lemma 1.3.6 to the multidimensional case. Make the algorithm explicit in the case of the two-dimensional density p(x,y) proportional to f(x,y)=(1+x^2+2 y^2)^{-4/3}.


Exercise 1.8 (Gaussian copula)
Write a simulation program for generating a two-dimensional vector with Laplace marginals and Gaussian copula (as in Figure 1.2).
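
A possible Python sketch, assuming unit-scale Laplace marginals and a correlation parameter rho for the underlying Gaussian pair (both choices, and the function names, are mine): draw a correlated Gaussian pair, push it to uniform marginals through the normal cdf, then apply the Laplace quantile function.

  import numpy as np
  from scipy.stats import norm   # standard normal cdf

  rng = np.random.default_rng()

  def gaussian_copula_laplace(rho, size):
      # 1. correlated standard Gaussian pair with correlation rho
      g1 = rng.standard_normal(size)
      g2 = rho * g1 + np.sqrt(1.0 - rho ** 2) * rng.standard_normal(size)
      # 2. uniform marginals via the normal cdf (this step fixes the copula)
      u1, u2 = norm.cdf(g1), norm.cdf(g2)
      # 3. Laplace marginals via the Laplace quantile function
      q = lambda u: np.where(u < 0.5, np.log(2.0 * u), -np.log(2.0 * (1.0 - u)))
      return q(u1), q(u2)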

Exercise 1.9 (Archimedean copula)
Let C be the Archimedean copula
C(u_1,...,u_d)=\phi^{-1}(\phi(u_1)+...+\phi(u_d))

associated with the random variable Y whose Laplace transform is \phi^{-1}(u)=\mathbb{E}(e^{-uY}), and suppose that Y>0 a.s. Let (X_i)_{1\leq i\leq d} be independent random variables, uniformly distributed on [0,1] and independent of Y. Define
U_i=\phi^{-1}\Big(-\frac 1 Y \log (X_i)\Big).

Prove that the vector (U_1,\dots,U_d) has uniform marginal distributions and that its copula is C.
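
As an illustration of this construction (essentially the Marshall-Olkin frailty algorithm), here is a Python sketch for the Clayton family, where one may take Y Gamma-distributed so that \phi^{-1}(u)=(1+\theta u)^{-1/\theta}; the parameter \theta>0 and the function name are my choices, not part of the exercise:

  import numpy as np

  rng = np.random.default_rng()

  def clayton_by_frailty(theta, d, size):
      # Frailty Y ~ Gamma(shape=1/theta, scale=theta), whose Laplace transform
      # is phi^{-1}(u) = (1 + theta*u)^(-1/theta)
      y = rng.gamma(shape=1.0 / theta, scale=theta, size=(size, 1))
      x = rng.uniform(size=(size, d))          # independent U([0,1]) variables
      # U_i = phi^{-1}( -log(X_i) / Y ), one row per sample
      return (1.0 + theta * (-np.log(x) / y)) ** (-1.0 / theta)
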
Solution.


This work is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License.
