Chapter 3 (General Random Variables): Normal Random Variables

This post is a set of reading notes for Introduction to Probability.

Normal Random Variables

  • A continuous random variable $X$ is said to be normal or Gaussian if it has a PDF of the form
    $$f_X(x)=\frac{1}{\sqrt{2\pi}\,\sigma}e^{-(x-\mu)^2/2\sigma^2}$$
    where $\mu$ and $\sigma$ are two scalar parameters characterizing the PDF, with $\sigma$ assumed positive. It can be verified that the normalization property holds:
    $$\frac{1}{\sqrt{2\pi}\,\sigma}\int_{-\infty}^{\infty}e^{-(x-\mu)^2/2\sigma^2}\,dx=1$$
  • The mean and the variance can be calculated to be
    $$E[X]=\mu,\qquad var(X)=\sigma^2$$
    To see this, note that the PDF is symmetric around $\mu$, so the mean can only be $\mu$. Furthermore, the variance is given by
    $$var(X)=\frac{1}{\sqrt{2\pi}\,\sigma}\int_{-\infty}^{\infty}(x-\mu)^2e^{-(x-\mu)^2/2\sigma^2}\,dx$$
    Using the change of variables $y=(x-\mu)/\sigma$ and integration by parts, we have
    $$\begin{aligned}var(X)&=\frac{\sigma^2}{\sqrt{2\pi}}\int_{-\infty}^{\infty}y^2e^{-y^2/2}\,dy\\&=\frac{\sigma^2}{\sqrt{2\pi}}\left(-ye^{-y^2/2}\right)\Big|_{-\infty}^{\infty}+\frac{\sigma^2}{\sqrt{2\pi}}\int_{-\infty}^{\infty}e^{-y^2/2}\,dy\\&=\frac{\sigma^2}{\sqrt{2\pi}}\int_{-\infty}^{\infty}e^{-y^2/2}\,dy\\&=\sigma^2\qquad\text{(normalization property of the normal PDF)}\end{aligned}$$
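The normalization, mean, and variance properties above can be checked numerically by integrating the PDF on a fine grid. This is a minimal sketch; the parameters `mu` and `sigma` are arbitrary values chosen only for the check.

```python
import math
import numpy as np

# Hypothetical parameters chosen only for this numerical check
mu, sigma = 2.0, 1.5

# Grid wide enough (+-10 sigma) that the truncated tails are negligible
x = np.linspace(mu - 10 * sigma, mu + 10 * sigma, 200001)
pdf = np.exp(-((x - mu) ** 2) / (2 * sigma ** 2)) / (math.sqrt(2 * math.pi) * sigma)
dx = x[1] - x[0]

total = float(np.sum(pdf) * dx)                   # normalization: should be ~1
mean = float(np.sum(x * pdf) * dx)                # E[X]: should be ~mu
var = float(np.sum((x - mean) ** 2 * pdf) * dx)   # var(X): should be ~sigma^2

print(round(total, 4), round(mean, 4), round(var, 4))
```

The Riemann sum agrees with the closed-form answers to within the grid resolution.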

The Standard Normal Random Variable


  • A normal random variable $Y$ with zero mean and unit variance is said to be a standard normal. Its CDF is denoted by $\Phi$:
    $$\Phi(y)=P(Y\leq y)=P(Y<y)=\frac{1}{\sqrt{2\pi}}\int_{-\infty}^{y}e^{-t^2/2}\,dt$$
  • Its values are recorded in the standard normal table.

  • Let $X$ be a normal random variable with mean $\mu$ and variance $\sigma^2$. We "standardize" $X$ by defining a new random variable $Y$ given by
    $$Y=\frac{X-\mu}{\sigma}$$
    Since $Y$ is a linear function of $X$, it is normal. Furthermore,
    $$E[Y]=\frac{E[X]-\mu}{\sigma}=0,\qquad var(Y)=\frac{var(X)}{\sigma^2}=1$$
    Thus, $Y$ is a standard normal random variable.
  • This fact allows us to calculate the probability of any event defined in terms of $X$: we redefine the event in terms of $Y$, and then use the standard normal table.
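In code, the table lookup can be replaced by the error function, since $\Phi(y)=\frac{1}{2}\left(1+\mathrm{erf}\left(y/\sqrt{2}\right)\right)$. The sketch below standardizes exactly as described above; the function names are illustrative.

```python
import math

def phi(y):
    # Standard normal CDF via the error function:
    # Phi(y) = (1 + erf(y / sqrt(2))) / 2
    return 0.5 * (1.0 + math.erf(y / math.sqrt(2.0)))

def normal_cdf(x, mu, sigma):
    # Standardize: P(X <= x) = Phi((x - mu) / sigma)
    return phi((x - mu) / sigma)

print(round(phi(0.0), 4))                    # 0.5, by symmetry
print(round(normal_cdf(3.0, 1.0, 2.0), 4))   # P(X <= 3) for mu=1, sigma=2 equals Phi(1) ~ 0.8413
```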

  • Normal random variables play an important role in a broad range of probabilistic models. Mathematically, the key fact is that the sum of a large number of independent and identically distributed (not necessarily normal) random variables has an approximately normal CDF. This property is captured in the celebrated central limit theorem.
    • Normal random variables are often used in signal processing and communications engineering to model noise and unpredictable distortions of signals.
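The central limit theorem can be illustrated with a quick simulation: a sum of $n$ iid Uniform$(0,1)$ variables has mean $n/2$ and variance $n/12$, and its empirical distribution is close to normal even for modest $n$. The sample sizes here are arbitrary choices for the sketch.

```python
import random
import statistics

random.seed(0)
n, trials = 50, 20000

# Each sample is a sum of n iid Uniform(0,1) variables (not normal individually)
sums = [sum(random.random() for _ in range(n)) for _ in range(trials)]

m = statistics.mean(sums)    # should be close to n/2 = 25
s = statistics.stdev(sums)   # should be close to sqrt(n/12) ~ 2.04

# By symmetry of the limiting normal, about half the sums fall below the mean
frac = sum(1 for v in sums if v <= n / 2) / trials

print(round(m, 2), round(s, 2), round(frac, 3))
```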

Example 3.8. Signal Detection.
A binary message is transmitted as a signal $s$, which is either $-1$ or $+1$. The communication channel corrupts the transmission with additive normal noise with mean $\mu=0$ and variance $\sigma^2$. The receiver concludes that the signal $-1$ (or $+1$) was transmitted if the value received is $<0$ (or $\geq 0$, respectively). What is the probability of error?

SOLUTION

  • An error occurs whenever $-1$ is transmitted and the noise $N$ is at least $1$, or whenever $+1$ is transmitted and the noise $N$ is smaller than $-1$. In the former case, the probability of error is
    $$\begin{aligned}P(N\geq 1)&=1-P(N<1)=1-P\left(\frac{N-\mu}{\sigma}<\frac{1-\mu}{\sigma}\right)\\&=1-\Phi\left(\frac{1-\mu}{\sigma}\right)=1-\Phi\left(\frac{1}{\sigma}\right)\end{aligned}$$
    In the latter case, the probability of error is the same, by symmetry.
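The error probability $1-\Phi(1/\sigma)$ from the solution can be evaluated for a few noise levels; as expected, noisier channels (larger $\sigma$) give a higher probability of error. The helper names here are illustrative, not from the text.

```python
import math

def phi(y):
    # Standard normal CDF: Phi(y) = (1 + erf(y / sqrt(2))) / 2
    return 0.5 * (1.0 + math.erf(y / math.sqrt(2.0)))

def error_prob(sigma):
    # P(error) = P(N >= 1) = 1 - Phi(1/sigma), by the symmetry argument above
    return 1.0 - phi(1.0 / sigma)

for sigma in (0.5, 1.0, 2.0):
    print(sigma, round(error_prob(sigma), 4))
# 0.5 -> 0.0228, 1.0 -> 0.1587, 2.0 -> 0.3085
```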


Reposted from blog.csdn.net/weixin_42437114/article/details/113782721