These are reading notes for *Introduction to Probability*.
Derived Distributions
Example 4.3.
- Let $Y = g(X) = X^2$, where $X$ is a random variable with known PDF. For any $y \geq 0$, we have
$$\begin{aligned}F_Y(y) &= P(Y\leq y)\\ &= P(X^2\leq y)\\ &= P(-\sqrt y\leq X\leq \sqrt y)\\ &= F_X(\sqrt y)-F_X(-\sqrt y)\end{aligned}$$
and therefore, by differentiating and using the chain rule,
$$f_Y(y)=\frac{1}{2\sqrt y}f_X(\sqrt y)+\frac{1}{2\sqrt y}f_X(-\sqrt y),\qquad y\geq 0$$
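As a quick numerical sanity check (my own sketch, not from the book): taking $X$ standard normal, the formula reduces to $f_Y(y)=e^{-y/2}/\sqrt{2\pi y}$, which we can compare against a Monte Carlo density estimate.

```python
import math
import random

random.seed(0)

def f_Y(y):
    # the derived formula, with X ~ N(0, 1) so that f_X(x) = exp(-x^2/2)/sqrt(2*pi)
    f_x = math.exp(-y / 2) / math.sqrt(2 * math.pi)  # f_X(sqrt(y)) = f_X(-sqrt(y))
    return 2 * (1 / (2 * math.sqrt(y))) * f_x

# Monte Carlo density estimate of Y = X^2 near y0, using a small bin of half-width h
y0, h, n = 1.0, 0.05, 200_000
hits = sum(1 for _ in range(n) if abs(random.gauss(0, 1) ** 2 - y0) < h)
density_est = hits / (n * 2 * h)
print(density_est, f_Y(y0))  # the two numbers should be close
```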
Problem 1.
Let $X$ be a random variable that is uniformly distributed between $-1$ and $1$. Find the PDF of $-\ln|X|$.
SOLUTION
- Let $Y = -\ln|X|$. We have, for $y \geq 0$,
$$F_Y(y) = P(Y \leq y) = P(\ln|X| \geq -y) = P(X \geq e^{-y}) + P(X \leq -e^{-y}) = 1 - e^{-y}$$
and therefore, by differentiation,
$$f_Y(y) = e^{-y},\qquad y\geq 0$$
so $Y$ is an exponential random variable with parameter $1$.
- This exercise provides a method for simulating an exponential random variable using a sample of a uniform random variable.
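The simulation method just mentioned can be sketched in a few lines (standard library only; the guard against $x=0$ is my own defensive addition):

```python
import math
import random

random.seed(1)

def exp_sample():
    """Draw an Exp(1) sample as Y = -ln|X| with X uniform on [-1, 1]."""
    x = random.uniform(-1.0, 1.0)
    while x == 0.0:  # guard against the (probability-zero) value x = 0
        x = random.uniform(-1.0, 1.0)
    return -math.log(abs(x))

samples = [exp_sample() for _ in range(100_000)]
mean = sum(samples) / len(samples)
print(mean)  # an Exp(1) random variable has mean 1
```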
The Linear Case
- We now focus on the important special case where $Y$ is a linear function of $X$: if $Y = aX + b$, where $a$ and $b$ are scalars with $a \neq 0$, then
$$f_Y(y)=\frac{1}{|a|}f_X\left(\frac{y-b}{a}\right)$$
To verify this formula, divide the problem into two cases: $a > 0$ and $a < 0$.
Example 4.5. A Linear Function of a Normal Random Variable is Normal.
- Suppose that $X$ is a normal random variable with mean $\mu$ and variance $\sigma^2$, and let $Y = aX + b$, where $a$ and $b$ are scalars, with $a \neq 0$. We have
$$f_X(x) = \frac{1}{\sqrt{2\pi}\sigma}e^{-(x-\mu)^2/2\sigma^2}$$
Therefore,
$$f_Y(y)=\frac{1}{\sqrt{2\pi}|a|\sigma}e^{-\frac{\left(\frac{y-b}{a}-\mu\right)^2}{2\sigma^2}}=\frac{1}{\sqrt{2\pi}|a|\sigma}e^{-\frac{(y-b-a\mu)^2}{2a^2\sigma^2}}$$
We recognize this as a normal PDF with mean $a\mu + b$ and variance $a^2\sigma^2$. In particular, $Y$ is a normal random variable.
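A small simulation can confirm the mean and variance of $Y = aX + b$; the parameter values below are arbitrary choices for illustration, not from the book.

```python
import random
import statistics

random.seed(2)

# hypothetical parameter choices for illustration
mu, sigma, a, b = 1.5, 2.0, -3.0, 0.5

ys = [a * random.gauss(mu, sigma) + b for _ in range(200_000)]
print(statistics.fmean(ys), statistics.pvariance(ys))
# theory: mean a*mu + b = -4.0, variance a^2 * sigma^2 = 36.0
```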
The Monotonic Case
- The calculation and the formula for the linear case can be generalized to the case where g g g is a monotonic function.
- Let $X$ be a continuous random variable and suppose that its range is contained in a certain interval $I$. We consider the random variable $Y = g(X)$, and assume that $g$ is strictly monotonic over the interval $I$. Furthermore, we assume that the function $g$ is differentiable.
(An important fact is that for a strictly monotonic function $g$, there exists an inverse function $h = g^{-1}$.)
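With $h = g^{-1}$, the resulting PDF formula (the standard monotonic-case result) is
$$f_Y(y)=f_X\big(h(y)\big)\left|\frac{dh}{dy}(y)\right|$$
for $y$ in the range of $g$, and $f_Y(y)=0$ otherwise.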
Functions of Two Random Variables
- The two-step procedure that first calculates the CDF and then differentiates to obtain the PDF also applies to functions of more than one random variable.
Example 4.8.
Let $X$ and $Y$ be independent random variables that are uniformly distributed on the interval $[0, 1]$. What is the PDF of the random variable $Z = Y/X$?
SOLUTION
- We consider separately the cases $0\leq z\leq 1$ and $z > 1$. As shown in Fig. 4.5, we have
$$F_Z(z)=\begin{cases}z/2, & 0\leq z\leq 1\\ 1-\dfrac{1}{2z}, & z>1\end{cases}$$
- By differentiating, we obtain
$$f_Z(z)=\begin{cases}1/2, & 0\leq z\leq 1\\ \dfrac{1}{2z^2}, & z>1\\ 0, & \text{otherwise}\end{cases}$$
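A Monte Carlo sketch (my own, standard library only) can check the resulting CDF, which takes the values $F_Z(1/2)=1/4$ and $F_Z(2)=3/4$:

```python
import random

random.seed(3)
n = 200_000

def draw_z():
    x, y = random.random(), random.random()
    while x == 0.0:  # guard against a (probability-zero) division by zero
        x = random.random()
    return y / x

zs = [draw_z() for _ in range(n)]
p_half = sum(z <= 0.5 for z in zs) / n  # theory: F_Z(1/2) = 1/4
p_two = sum(z <= 2.0 for z in zs) / n   # theory: F_Z(2) = 3/4
print(p_half, p_two)
```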
Example 4.9.
Romeo and Juliet have a date at a given time, and each, independently, will be late by an amount of time that is exponentially distributed with parameter $\lambda$. What is the PDF of the difference between their times of arrival?
SOLUTION
- Let us denote by $X$ and $Y$ the amounts by which Romeo and Juliet are late, respectively. We want to find the PDF of $Z = X - Y$, assuming that $X$ and $Y$ are independent and exponentially distributed with parameter $\lambda$. We will first calculate the CDF $F_Z(z)$ by considering separately the cases $z\geq 0$ and $z < 0$ (see Fig. 4.6).
- For $z\geq 0$, we have (see the left side of Fig. 4.6)
$$\begin{aligned}F_Z(z)&=P(X-Y\leq z)\\&=1-P(X-Y>z)\\&=1-\int_0^\infty\left(\int_{z+y}^\infty f_{X,Y}(x,y)\,dx\right)dy\\&=1-\int_0^\infty\lambda e^{-\lambda y}\left(\int_{z+y}^\infty \lambda e^{-\lambda x}\,dx\right)dy\\&=1-\int_0^\infty\lambda e^{-\lambda y}e^{-\lambda (z+y)}\,dy\\&=1-\frac{1}{2}e^{-\lambda z}\end{aligned}$$
- For the case $z < 0$, we can use a similar calculation, but we can also argue using symmetry. Indeed, the symmetry of the situation implies that the random variables $Z = X - Y$ and $-Z = Y - X$ have the same distribution. We have
$$F_Z(z)=P(Z\leq z)=P(-Z\geq -z)=P(Z\geq -z)=1-F_Z(-z)$$
With $z < 0$, we have $-z > 0$ and, using the formula derived earlier,
$$F_Z(z)=1-F_Z(-z)=1-\left(1-\frac{1}{2}e^{\lambda z}\right)=\frac{1}{2}e^{\lambda z}$$
- Combining the two cases $z\geq 0$ and $z < 0$, we obtain
$$F_Z(z)=\begin{cases}1-\dfrac{1}{2}e^{-\lambda z}, & z\geq 0\\[4pt] \dfrac{1}{2}e^{\lambda z}, & z<0\end{cases}$$
- We now calculate the PDF of $Z$ by differentiating its CDF. We have
$$f_Z(z)=\begin{cases}\dfrac{\lambda}{2}e^{-\lambda z}, & z\geq 0\\[4pt] \dfrac{\lambda}{2}e^{\lambda z}, & z<0\end{cases}$$
or
$$f_Z(z)=\frac{\lambda}{2}e^{-\lambda|z|}$$
This is known as a two-sided exponential PDF (双边指数概率密度函数), also called the Laplace PDF (拉普拉斯概率密度函数).
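A short simulation (my own sketch; the rate $\lambda=1.5$ is an arbitrary illustrative choice) can check two consequences of this Laplace PDF: symmetry about $0$, and the variance $2/\lambda^2$ (the sum of the variances of the two exponentials).

```python
import random
import statistics

random.seed(4)
lam, n = 1.5, 100_000  # lam is an arbitrary illustrative rate

zs = [random.expovariate(lam) - random.expovariate(lam) for _ in range(n)]
p_nonpos = sum(z <= 0 for z in zs) / n  # symmetry of the Laplace PDF gives 1/2
var = statistics.pvariance(zs)          # theory: Var(Z) = 2 / lam**2
print(p_nonpos, var)
```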
Problem 11.
Use the convolution formula to establish that the sum of two independent Poisson random variables with parameters $\lambda$ and $\mu$, respectively, is Poisson with parameter $\lambda+\mu$.
SOLUTION
- The convolution of two Poisson PMFs is of the form
$$\sum_{i=0}^k\frac{\lambda^ie^{-\lambda}}{i!}\cdot \frac{\mu^{k-i}e^{-\mu}}{(k-i)!}=e^{-(\lambda+\mu)}\sum_{i=0}^k\frac{\lambda^i\mu^{k-i}}{i!(k-i)!}$$
We have
$$(\lambda+\mu)^k=\sum_{i=0}^k\binom{k}{i}\lambda^i\mu^{k-i}=\sum_{i=0}^k\frac{k!}{i!(k-i)!}\lambda^i\mu^{k-i}$$
Thus, the desired PMF is
$$\frac{e^{-(\lambda+\mu)}}{k!}\sum_{i=0}^k\binom{k}{i}\lambda^i\mu^{k-i}=\frac{e^{-(\lambda+\mu)}}{k!}(\lambda+\mu)^k$$
which is a Poisson PMF with mean $\lambda+\mu$.
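The convolution sum can also be evaluated numerically (a sketch; $\lambda=2$, $\mu=3$ are arbitrary illustrative parameters) and compared term by term with the Poisson$(\lambda+\mu)$ PMF:

```python
import math

def poisson_pmf(k, rate):
    return math.exp(-rate) * rate ** k / math.factorial(k)

lam, mu = 2.0, 3.0  # arbitrary illustrative parameters

# the convolution sum from the derivation, compared with the Poisson(lam + mu) PMF
for k in range(10):
    conv = sum(poisson_pmf(i, lam) * poisson_pmf(k - i, mu) for i in range(k + 1))
    print(k, conv, poisson_pmf(k, lam + mu))  # the last two columns agree
```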
Problem 16. The polar coordinates of two independent normal random variables.
Let $X$ and $Y$ be independent standard normal random variables. The pair $(X, Y)$ can be described in polar coordinates in terms of random variables $R\geq 0$ and $\Theta\in [0,2\pi]$, so that
$$X = R\cos\Theta,\qquad Y = R\sin\Theta$$
- (a) Show that $\Theta$ is uniformly distributed in $[0, 2\pi]$, that $R$ has the PDF
$$f_R(r)=re^{-r^2/2},\qquad r\geq 0$$
and that $R$ and $\Theta$ are independent.
(The random variable $R$ is said to have a Rayleigh distribution (瑞利分布).)
- (b) Show that $R^2$ has an exponential distribution with parameter $1/2$.
[Note: Using the results in this problem, we see that samples of a normal random variable can be generated using samples of independent uniform and exponential random variables.]
SOLUTION
- (a) We first find the joint CDF of $R$ and $\Theta$. Fix some $r > 0$ and some $\theta\in [0, 2\pi]$, and let $A$ be the set of points $(x, y)$ whose polar coordinates $(\bar r,\bar\theta)$ satisfy $0\leq\bar r\leq r$ and $0\leq\bar\theta\leq\theta$; note that the set $A$ is a sector of a circle of radius $r$, with angle $\theta$. We have
$$\begin{aligned}F_{R,\Theta}(r,\theta)&=P(R\leq r,\Theta\leq\theta)=P((X,Y)\in A)\\&=\frac{1}{2\pi}\iint_{(x,y)\in A}e^{-\frac{x^2+y^2}{2}}\,dx\,dy\\&=\frac{1}{2\pi}\int_0^\theta\int_0^re^{-\frac{\bar r^2}{2}}\,\bar r\,d\bar r\,d\bar\theta\end{aligned}$$
We then differentiate, to find that
$$f_{R,\Theta}(r,\theta)=\frac{\partial^2F_{R,\Theta}(r,\theta)}{\partial r\,\partial\theta}=\frac{r}{2\pi}e^{-r^2/2},\qquad r\geq 0$$
Thus,
$$f_R(r)=\int_0^{2\pi}f_{R,\Theta}(r,\theta)\,d\theta=re^{-r^2/2},\qquad r\geq 0$$
Furthermore,
$$f_{\Theta|R}(\theta|r)=\frac{f_{R,\Theta}(r,\theta)}{f_R(r)}=\frac{1}{2\pi},\qquad \theta\in[0,2\pi]$$
Since the conditional PDF $f_{\Theta|R}(\theta|r)$ is unaffected by the value of the conditioning variable $R$, it follows that it is also equal to the unconditional PDF $f_\Theta$. In particular, $f_{R,\Theta}(r,\theta)=f_R(r)f_\Theta(\theta)$, so that $R$ and $\Theta$ are independent.
- (b) Let $t\geq 0$. We have
$$P(R^2\geq t)=P(R\geq\sqrt t)=\int_{\sqrt t}^\infty re^{-r^2/2}\,dr=\int_{t/2}^\infty e^{-u}\,du=e^{-t/2}$$
where we have used the change of variables $u = r^2/2$. By differentiating, we obtain
$$f_{R^2}(t)=\frac{1}{2}e^{-t/2},\qquad t\geq 0$$
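The note above says normal samples can be generated from independent uniform and exponential samples; here is a minimal sketch of that idea (essentially the Box–Muller method): draw $R^2\sim\text{Exp}(1/2)$ and $\Theta\sim\text{Uniform}[0,2\pi]$, then form $X=R\cos\Theta$, $Y=R\sin\Theta$.

```python
import math
import random

random.seed(5)

def std_normal_pair():
    """Two independent N(0,1) samples from one exponential and one uniform sample."""
    r = math.sqrt(random.expovariate(0.5))      # R**2 ~ Exp(1/2), so R = its sqrt
    theta = random.uniform(0.0, 2.0 * math.pi)  # Theta ~ Uniform[0, 2*pi]
    return r * math.cos(theta), r * math.sin(theta)

xs = [std_normal_pair()[0] for _ in range(100_000)]
mean = sum(xs) / len(xs)
var = sum(x * x for x in xs) / len(xs) - mean ** 2
print(mean, var)  # should be close to 0 and 1
```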
Sums of Independent Random Variables – Convolution (卷积)
- For some initial insight, we start by deriving a PMF formula for the case where $X$ and $Y$ are discrete. Let $Z = X + Y$, where $X$ and $Y$ are independent integer-valued random variables with PMFs $p_X$ and $p_Y$, respectively. Then, for any integer $z$,
$$\begin{aligned}p_Z(z)&=P(X+Y=z)\\&=\sum_{\{(x,y)\,|\,x+y=z\}}P(X=x,Y=y)\\&=\sum_xP(X=x,Y=z-x)\\&=\sum_xp_X(x)p_Y(z-x)\end{aligned}$$
The resulting PMF $p_Z$ is called the convolution of the PMFs of $X$ and $Y$.
- Suppose now that $X$ and $Y$ are independent continuous random variables with PDFs $f_X$ and $f_Y$, respectively. We wish to find the PDF of $Z = X + Y$. Towards this goal, we will first find the joint PDF of $X$ and $Z$, and then integrate to find the PDF of $Z$.
$$\begin{aligned}P(Z\leq z\,|\,X= x) &= P(X + Y\leq z\,|\,X= x)\\ &= P(x + Y\leq z\,|\,X= x)\\ &= P(x + Y\leq z)\\ &= P(Y\leq z - x)\end{aligned}$$
where the third equality follows from the independence of $X$ and $Y$. By differentiating both sides with respect to $z$, we see that $f_{Z|X}(z|x)= f_Y(z-x)$. Using the multiplication rule, we have
$$f_{X,Z}(x, z) = f_X(x)f_{Z|X}(z|x)= f_X(x)f_Y(z - x)$$
from which we finally obtain
$$f_Z(z)=\int_{-\infty}^\infty f_{X,Z}(x,z)\,dx=\int_{-\infty}^\infty f_X(x)f_Y(z - x)\,dx$$
Example 4.10.
- The random variables $X$ and $Y$ are independent and uniformly distributed in the interval $[0, 1]$. The PDF of $Z = X + Y$ is
$$f_Z(z)=\int_{-\infty}^\infty f_X(x)f_Y(z - x)\,dx$$
- The integrand $f_X(x)f_Y(z - x)$ is nonzero (and equal to 1) for $0\leq x\leq 1$ and $0\leq z - x\leq 1$. Combining these two inequalities, the integrand is nonzero for $\max\{0, z - 1\}\leq x \leq \min\{1, z\}$. Thus,
$$f_Z(z)=\begin{cases}z, & 0\leq z\leq 1\\ 2-z, & 1< z\leq 2\\ 0, & \text{otherwise}\end{cases}$$
Example 4.11. The Sum of Two Independent Normal Random Variables is Normal.
- Let $X$ and $Y$ be independent normal random variables with means $\mu_x, \mu_y$ and variances $\sigma_x^2,\sigma_y^2$, respectively, and let $Z = X + Y$. We have
$$\begin{aligned}f_Z(z)&=\int_{-\infty}^\infty f_X(x)f_Y(z-x)\,dx\\&=\int_{-\infty}^\infty \frac{1}{\sqrt{2\pi}\sigma_x}\exp\left\{-\frac{(x-\mu_x)^2}{2\sigma_x^2}\right\}\frac{1}{\sqrt{2\pi}\sigma_y}\exp\left\{-\frac{(z-x-\mu_y)^2}{2\sigma_y^2}\right\}dx\\&=\frac{1}{\sqrt{2\pi(\sigma_x^2+\sigma_y^2)}}\exp\left\{-\frac{(z-\mu_x-\mu_y)^2}{2(\sigma_x^2+\sigma_y^2)}\right\}\end{aligned}$$
Given that scalar multiples of normal random variables are also normal, it follows that $aX + bY$ is also normal, for any nonzero scalars $a$ and $b$.
Example 4.12. The Difference of Two Independent Random Variables.
- The convolution formula can also be used to find the PDF of $X - Y$, when $X$ and $Y$ are independent, by viewing $X - Y$ as the sum of $X$ and $-Y$. We observe that the PDF of $-Y$ is given by $f_{-Y}(y) = f_Y(-y)$, and obtain
$$f_{X-Y}(z)=\int_{-\infty}^\infty f_X(x)f_{-Y}(z-x)\,dx=\int_{-\infty}^\infty f_X(x)f_Y(x-z)\,dx$$
Problem 12.
The random variables $X, Y,$ and $Z$ are independent and uniformly distributed between zero and one. Find the PDF of $X + Y + Z$.
SOLUTION
- Let $V = X + Y$. As in Example 4.10, the PDF of $V$ is
$$f_V(v)=\begin{cases}v, & 0\leq v\leq 1\\ 2-v, & 1< v\leq 2\\ 0, & \text{otherwise}\end{cases}$$
- Let $W = X + Y + Z = V + Z$. We convolve the PDFs $f_V$ and $f_Z$, to obtain
$$f_W(w)=\int f_V(v)f_Z(w-v)\,dv$$
We first need to determine the limits of the integration. The integrand can be nonzero only if $0\leq v\leq 2$ and $0\leq w-v\leq 1$, that is, only if
$$\max\{0, w-1\}\leq v\leq \min\{2, w\}$$
- We consider three separate cases. If $w \leq 1$, we have
$$f_W(w)=\int_0^w f_V(v)f_Z(w-v)\,dv=\int_0^w v\,dv=\frac{w^2}{2}$$
- If $1 \leq w \leq 2$, we have
$$\begin{aligned}f_W(w)&=\int_{w-1}^wf_V(v)f_Z(w-v)\,dv\\&=\int_{w-1}^1v\,dv+\int_1^w(2-v)\,dv\\&=\frac{1}{2}-\frac{(w-1)^2}{2}-\frac{(w-2)^2}{2}+\frac{1}{2}\end{aligned}$$
- Finally, if $2 \leq w \leq 3$, we have
$$f_W(w)=\int_{w-1}^2 f_V(v)f_Z(w-v)\,dv=\int_{w-1}^2 (2-v)\,dv=\frac{(3-w)^2}{2}$$
- To summarize,
$$f_W(w)=\begin{cases}\dfrac{w^2}{2}, & 0\leq w\leq 1\\[4pt] 1-\dfrac{(w-1)^2}{2}-\dfrac{(w-2)^2}{2}, & 1< w\leq 2\\[4pt] \dfrac{(3-w)^2}{2}, & 2< w\leq 3\\[4pt] 0, & \text{otherwise}\end{cases}$$
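Two consequences of this piecewise PDF are easy to check by simulation (my own sketch): integrating $w^2/2$ over $[0,1]$ gives $P(W\leq 1)=1/6$, and the density is symmetric about $3/2$, so $P(W\leq 3/2)=1/2$.

```python
import random

random.seed(6)
n = 200_000
ws = [random.random() + random.random() + random.random() for _ in range(n)]

# P(W <= 1) is the integral of w**2/2 over [0, 1], i.e. 1/6
p_one = sum(w <= 1.0 for w in ws) / n
# the PDF is symmetric about 3/2, so P(W <= 3/2) = 1/2
p_mid = sum(w <= 1.5 for w in ws) / n
print(p_one, p_mid)
```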
Problem 13.
Consider a PDF that is positive only within an interval $[a, b]$ and is symmetric around the mean $(a+b)/2$. Let $X$ and $Y$ be independent random variables that both have this PDF. Suppose that you have calculated the PDF of $X + Y$. How can you easily obtain the PDF of $X - Y$?
- We have $X - Y = X + Z - (a+b)$, where $Z = a+b-Y$ is distributed identically with $X$ and $Y$. Thus, the PDF of $X + Z$ is the same as the PDF of $X + Y$, and the PDF of $X - Y$ is obtained by shifting the PDF of $X + Y$ to the left by $a + b$.
Graphical Calculation of Convolutions
- Consider two PDFs $f_X(t)$ and $f_Y(t)$. For a fixed value of $z$, the graphical evaluation of the convolution
$$f_Z(z)=\int_{-\infty}^\infty f_X(t)f_Y(z-t)\,dt$$
consists of the following steps:
- (a) We plot $f_Y(z-t)$ as a function of $t$. This plot has the same shape as the plot of $f_Y(t)$, except that it is first "flipped" and then shifted by an amount $z$. If $z > 0$, this is a shift to the right; if $z < 0$, this is a shift to the left.
- (b) We place the plots of $f_X(t)$ and $f_Y(z - t)$ on top of each other, and form their product.
- (c) We calculate the value of $f_Z(z)$ by calculating the integral of the product of these two plots.
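Steps (a)-(c) translate directly into a discrete approximation on a grid; the sketch below (my own, assuming two uniform $[0,1]$ PDFs) evaluates the convolution at a few points and recovers the triangular PDF of Example 4.10.

```python
# A discrete sketch of steps (a)-(c), assuming two uniform [0, 1] PDFs.
dt = 0.01
ts = [i * dt for i in range(-200, 201)]  # grid covering [-2, 2]

def f_X(t):
    return 1.0 if 0.0 <= t <= 1.0 else 0.0

def f_Y(t):
    return 1.0 if 0.0 <= t <= 1.0 else 0.0

def conv_at(z):
    # (a) flip f_Y and shift by z: t -> f_Y(z - t)
    # (b) multiply pointwise with f_X(t)
    # (c) integrate the product (Riemann sum)
    return sum(f_X(t) * f_Y(z - t) for t in ts) * dt

print(conv_at(0.5), conv_at(1.0), conv_at(1.5))
# traces the triangular PDF of Example 4.10 (values near 0.5, 1.0, 0.5)
```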