CHAPTER 2
Problem 2.1 :

The marginal probabilities are obtained by summing the joint probabilities over the other index :
$$P(A_i) = \sum_{j=1}^{3} P(A_i, B_j), \quad i = 1, 2, 3, 4$$
Hence :
$$P(A_1) = \sum_{j=1}^{3} P(A_1, B_j) = 0.10 + 0.08 + 0.13 = 0.31$$
$$P(A_2) = \sum_{j=1}^{3} P(A_2, B_j) = 0.05 + 0.03 + 0.09 = 0.17$$
$$P(A_3) = \sum_{j=1}^{3} P(A_3, B_j) = 0.05 + 0.12 + 0.14 = 0.31$$
$$P(A_4) = \sum_{j=1}^{3} P(A_4, B_j) = 0.11 + 0.04 + 0.06 = 0.21$$
Similarly :
$$P(B_1) = \sum_{i=1}^{4} P(A_i, B_1) = 0.10 + 0.05 + 0.05 + 0.11 = 0.31$$
$$P(B_2) = \sum_{i=1}^{4} P(A_i, B_2) = 0.08 + 0.03 + 0.12 + 0.04 = 0.27$$
$$P(B_3) = \sum_{i=1}^{4} P(A_i, B_3) = 0.13 + 0.09 + 0.14 + 0.06 = 0.42$$
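As a quick numerical sanity check, a minimal NumPy sketch (the joint-probability table is the one used in the solution above) recovers the same marginals :

```python
import numpy as np

# Joint probabilities P(A_i, B_j); rows i = 1..4, columns j = 1..3
P = np.array([[0.10, 0.08, 0.13],
              [0.05, 0.03, 0.09],
              [0.05, 0.12, 0.14],
              [0.11, 0.04, 0.06]])

print(P.sum(axis=1))  # P(A_i): [0.31 0.17 0.31 0.21]
print(P.sum(axis=0))  # P(B_j): [0.31 0.27 0.42]
print(P.sum())        # total probability: 1.0
```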
Problem 2.2 :
The relationship holds for $n = 2$ (2-1-34) : $p(x_1, x_2) = p(x_2|x_1)p(x_1)$.
Suppose it holds for $n = k$, i.e. :
$$p(x_1, x_2, \ldots, x_k) = p(x_k|x_{k-1}, \ldots, x_1)\, p(x_{k-1}|x_{k-2}, \ldots, x_1) \cdots p(x_1)$$
Then for $n = k + 1$ :
$$p(x_1, x_2, \ldots, x_k, x_{k+1}) = p(x_{k+1}|x_k, x_{k-1}, \ldots, x_1)\, p(x_k, x_{k-1}, \ldots, x_1)$$
$$= p(x_{k+1}|x_k, x_{k-1}, \ldots, x_1)\, p(x_k|x_{k-1}, \ldots, x_1)\, p(x_{k-1}|x_{k-2}, \ldots, x_1) \cdots p(x_1)$$
Hence the relationship holds for n = k + 1, and by induction it holds for any n.
Problem 2.3 :
Following the same procedure as in example 2-1-1, we prove :
$$p_Y(y) = \frac{1}{|a|}\, p_X\!\left(\frac{y - b}{a}\right)$$
Problem 2.4 :
Relationship (2-1-44) gives :
$$p_Y(y) = \frac{1}{3a\left[(y - b)/a\right]^{2/3}}\, p_X\!\left(\left[\frac{y - b}{a}\right]^{1/3}\right)$$
$X$ is a Gaussian r.v. with zero mean and unit variance : $p_X(x) = \frac{1}{\sqrt{2\pi}}\, e^{-x^2/2}$. Hence :
$$p_Y(y) = \frac{1}{3a\sqrt{2\pi}\left[(y - b)/a\right]^{2/3}}\, e^{-\frac{1}{2}\left(\frac{y-b}{a}\right)^{2/3}}$$
[Figure : plot of the pdf of Y for a = 2, b = 3, over −10 ≤ y ≤ 10.]
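The plotted curve can be reproduced numerically. A minimal Monte Carlo sketch, assuming $Y = aX^3 + b$ with $X$ standard normal (consistent with the inverse transformation used above), compares the derived pdf against a histogram of simulated samples :

```python
import numpy as np

# Histogram of Y = a*X^3 + b vs. the derived pdf, for a = 2, b = 3 as in the plot.
a, b = 2.0, 3.0
rng = np.random.default_rng(0)
y = a * rng.standard_normal(1_000_000) ** 3 + b

def p_y(y):
    # derived pdf; |u|^(2/3) handles the real cube root for y < b
    u = np.abs((y - b) / a)
    return np.exp(-0.5 * u ** (2 / 3)) / (3 * a * np.sqrt(2 * np.pi) * u ** (2 / 3))

counts, edges = np.histogram(y, bins=200, range=(-10, 10))
dens = counts / (y.size * np.diff(edges))         # absolute density over [-10, 10]
centers = 0.5 * (edges[:-1] + edges[1:])
mask = np.abs(centers - b) > 1                    # the pdf diverges at y = b
print(np.max(np.abs(dens - p_y(centers))[mask]))  # small (~1e-3): histogram matches pdf
```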
Problem 2.5 :
(a) Since $(X_r, X_i)$ are statistically independent :
$$p_X(x_r, x_i) = p_X(x_r)p_X(x_i) = \frac{1}{2\pi\sigma^2}\, e^{-(x_r^2 + x_i^2)/2\sigma^2}$$
Also :
$$Y_r + jY_i = (X_r + jX_i)e^{j\phi} \Rightarrow X_r + jX_i = (Y_r + jY_i)e^{-j\phi} = Y_r\cos\phi + Y_i\sin\phi + j(-Y_r\sin\phi + Y_i\cos\phi)$$
$$\Rightarrow \quad X_r = Y_r\cos\phi + Y_i\sin\phi, \qquad X_i = -Y_r\sin\phi + Y_i\cos\phi$$
The Jacobian of the above transformation is :
$$J = \begin{vmatrix} \frac{\partial X_r}{\partial Y_r} & \frac{\partial X_r}{\partial Y_i} \\[4pt] \frac{\partial X_i}{\partial Y_r} & \frac{\partial X_i}{\partial Y_i} \end{vmatrix} = \begin{vmatrix} \cos\phi & \sin\phi \\ -\sin\phi & \cos\phi \end{vmatrix} = 1$$
Hence, by (2-1-55) :
$$p_Y(y_r, y_i) = p_X\left((Y_r\cos\phi + Y_i\sin\phi),\ (-Y_r\sin\phi + Y_i\cos\phi)\right) = \frac{1}{2\pi\sigma^2}\, e^{-(y_r^2 + y_i^2)/2\sigma^2}$$

(b) $\mathbf{Y} = A\mathbf{X}$ and $\mathbf{X} = A^{-1}\mathbf{Y}$. Now, $p_X(\mathbf{x}) = \frac{1}{(2\pi\sigma^2)^{n/2}}\, e^{-\mathbf{x}'\mathbf{x}/2\sigma^2}$ (the covariance matrix $M$ of the random variables $x_1, \ldots, x_n$ is $M = \sigma^2 I$, since they are i.i.d.) and $J = 1/|\det(A)|$. Hence :
$$p_Y(\mathbf{y}) = \frac{1}{(2\pi\sigma^2)^{n/2}}\, \frac{1}{|\det(A)|}\, e^{-\mathbf{y}'(A^{-1})'A^{-1}\mathbf{y}/2\sigma^2}$$
For the pdf's of $\mathbf{X}$ and $\mathbf{Y}$ to be identical we require that :
$$|\det(A)| = 1 \quad \text{and} \quad (A^{-1})'A^{-1} = I \Longrightarrow A^{-1} = A'$$
Hence, $A$ must be a unitary (orthogonal) matrix.
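A quick numeric illustration : for the rotation of part (a), $|\det A| = 1$ and $A^{-1} = A'$, and the transformed i.i.d. Gaussian samples keep the same covariance. A minimal sketch (the angle and sample size are arbitrary choices) :

```python
import numpy as np

# Check that an orthogonal A leaves the i.i.d. Gaussian vector's law unchanged.
phi = 0.7
A = np.array([[np.cos(phi), np.sin(phi)],
              [-np.sin(phi), np.cos(phi)]])
print(abs(np.linalg.det(A)))             # 1.0
print(np.allclose(A.T @ A, np.eye(2)))   # True: A^{-1} = A'

rng = np.random.default_rng(1)
X = rng.standard_normal((2, 100_000))
Y = A @ X
print(np.cov(Y))   # approximately the identity, same as cov(X)
```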
Problem 2.6 :
(a)
$$\psi_Y(jv) = E\left[e^{jvY}\right] = E\left[e^{jv\sum_{i=1}^{n} x_i}\right] = E\left[\prod_{i=1}^{n} e^{jvx_i}\right] = \prod_{i=1}^{n} E\left[e^{jvx_i}\right] = \left[\psi_X(jv)\right]^n$$
where the product step uses the independence of the $x_i$. But,
$$p_X(x) = p\,\delta(x - 1) + (1 - p)\,\delta(x) \Rightarrow \psi_X(jv) = 1 - p + pe^{jv}$$
$$\Rightarrow \psi_Y(jv) = \left(1 - p + pe^{jv}\right)^n$$
(b)
$$E(Y) = -j\left.\frac{d\psi_Y(jv)}{dv}\right|_{v=0} = -jn(1 - p + pe^{jv})^{n-1}jpe^{jv}\Big|_{v=0} = np$$
and
$$E(Y^2) = -\left.\frac{d^2\psi_Y(jv)}{dv^2}\right|_{v=0} = -\frac{d}{dv}\left[jn(1 - p + pe^{jv})^{n-1}pe^{jv}\right]\Big|_{v=0} = np + np(n - 1)p$$
$$\Rightarrow E(Y^2) = n^2p^2 + np(1 - p)$$
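Since $Y$ is a sum of $n$ i.i.d. Bernoulli($p$) variables, i.e. binomial, both moments are easy to spot-check by simulation. A minimal sketch with illustrative $n$ and $p$ :

```python
import numpy as np

# Monte Carlo check of E(Y) = np and E(Y^2) = n^2 p^2 + np(1-p).
n, p = 10, 0.3
rng = np.random.default_rng(2)
Y = rng.binomial(n, p, size=1_000_000)
print(Y.mean(), n * p)                               # ~3.0  vs 3.0
print((Y**2).mean(), n**2 * p**2 + n * p * (1 - p))  # ~11.1 vs 11.1
```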
Problem 2.7 :
$$\psi(jv_1, jv_2, jv_3, jv_4) = E\left[e^{j(v_1x_1 + v_2x_2 + v_3x_3 + v_4x_4)}\right]$$
$$E(X_1X_2X_3X_4) = (-j)^4 \left.\frac{\partial^4 \psi(jv_1, jv_2, jv_3, jv_4)}{\partial v_1 \partial v_2 \partial v_3 \partial v_4}\right|_{v_1=v_2=v_3=v_4=0}$$
From (2-1-151) of the text, and the zero-mean property of the given rv's :
$$\psi(j\mathbf{v}) = e^{-\frac{1}{2}\mathbf{v}'M\mathbf{v}}$$
where $\mathbf{v} = [v_1, v_2, v_3, v_4]'$, $M = [\mu_{ij}]$.
We obtain the desired result by bringing the exponent to a scalar form and then performing
quadruple differentiation. We can simplify the procedure by noting that :
$$\frac{\partial \psi(j\mathbf{v})}{\partial v_i} = -\boldsymbol{\mu}_i'\mathbf{v}\, e^{-\frac{1}{2}\mathbf{v}'M\mathbf{v}}$$
where $\boldsymbol{\mu}_i' = [\mu_{i1}, \mu_{i2}, \mu_{i3}, \mu_{i4}]$. Also note that :
$$\frac{\partial \boldsymbol{\mu}_j'\mathbf{v}}{\partial v_i} = \mu_{ij} = \mu_{ji}$$
Hence :
$$\left.\frac{\partial^4 \psi(jv_1, jv_2, jv_3, jv_4)}{\partial v_1 \partial v_2 \partial v_3 \partial v_4}\right|_{\mathbf{v}=0} = \mu_{12}\mu_{34} + \mu_{23}\mu_{14} + \mu_{24}\mu_{13}$$
and since $(-j)^4 = 1$, this is exactly $E(X_1X_2X_3X_4)$.
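This moment-factoring identity can be spot-checked by simulation. A minimal sketch; the covariance matrix below is an arbitrary assumed example, not from the problem statement :

```python
import numpy as np

# Monte Carlo check of E(X1 X2 X3 X4) = mu12*mu34 + mu13*mu24 + mu14*mu23
# for zero-mean jointly Gaussian rv's with covariance M.
rng = np.random.default_rng(3)
B = rng.standard_normal((4, 4))
M = B @ B.T                       # an arbitrary valid covariance matrix
X = rng.multivariate_normal(np.zeros(4), M, size=2_000_000)

empirical = np.prod(X, axis=1).mean()
analytic = M[0, 1]*M[2, 3] + M[0, 2]*M[1, 3] + M[0, 3]*M[1, 2]
print(empirical, analytic)        # agree to Monte Carlo accuracy
```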
Problem 2.8 :
For the central chi-square with $n$ degrees of freedom :
$$\psi(jv) = \frac{1}{(1 - j2v\sigma^2)^{n/2}}$$
Now :
$$\frac{d\psi(jv)}{dv} = \frac{jn\sigma^2}{(1 - j2v\sigma^2)^{n/2+1}} \Rightarrow E(Y) = -j\left.\frac{d\psi(jv)}{dv}\right|_{v=0} = n\sigma^2$$
$$\frac{d^2\psi(jv)}{dv^2} = \frac{-2n\sigma^4(n/2 + 1)}{(1 - j2v\sigma^2)^{n/2+2}} \Rightarrow E(Y^2) = -\left.\frac{d^2\psi(jv)}{dv^2}\right|_{v=0} = n(n + 2)\sigma^4$$
The variance is $\sigma_Y^2 = E(Y^2) - [E(Y)]^2 = 2n\sigma^4$.

For the non-central chi-square with $n$ degrees of freedom :
$$\psi(jv) = \frac{1}{(1 - j2v\sigma^2)^{n/2}}\, e^{jvs^2/(1 - j2v\sigma^2)}$$
where by definition : $s^2 = \sum_{i=1}^{n} m_i^2$.
$$\frac{d\psi(jv)}{dv} = \left[\frac{jn\sigma^2}{(1 - j2v\sigma^2)^{n/2+1}} + \frac{js^2}{(1 - j2v\sigma^2)^{n/2+2}}\right] e^{jvs^2/(1 - j2v\sigma^2)}$$
Hence, $E(Y) = -j\left.\frac{d\psi(jv)}{dv}\right|_{v=0} = n\sigma^2 + s^2$ and
$$\frac{d^2\psi(jv)}{dv^2} = \left[\frac{-n\sigma^4(n + 2)}{(1 - j2v\sigma^2)^{n/2+2}} + \frac{-s^2(n + 4)\sigma^2 - ns^2\sigma^2}{(1 - j2v\sigma^2)^{n/2+3}} + \frac{-s^4}{(1 - j2v\sigma^2)^{n/2+4}}\right] e^{jvs^2/(1 - j2v\sigma^2)}$$
Hence,
$$E(Y^2) = -\left.\frac{d^2\psi(jv)}{dv^2}\right|_{v=0} = 2n\sigma^4 + 4s^2\sigma^2 + \left(n\sigma^2 + s^2\right)^2$$
and
$$\sigma_Y^2 = E(Y^2) - [E(Y)]^2 = 2n\sigma^4 + 4\sigma^2 s^2$$
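Both moment results can be spot-checked by simulating sums of squared Gaussians. A minimal sketch; the means $m_i$ below are illustrative values, not from the problem statement (set them to zero for the central case) :

```python
import numpy as np

# Monte Carlo check of E(Y) = n*sigma^2 + s^2 and
# var(Y) = 2n*sigma^4 + 4*sigma^2*s^2 for Y = sum_i (m_i + sigma*N_i)^2.
n, sigma = 5, 1.5
m = np.array([0.4, -0.2, 1.0, 0.0, 0.7])   # assumed means; s^2 = sum(m_i^2)
s2 = np.sum(m**2)

rng = np.random.default_rng(4)
Y = np.sum((m + sigma * rng.standard_normal((1_000_000, n)))**2, axis=1)
print(Y.mean(), n * sigma**2 + s2)                    # ~12.94 vs 12.94
print(Y.var(), 2 * n * sigma**4 + 4 * sigma**2 * s2)  # ~65.84 vs 65.84
```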
Problem 2.9 :
The Cauchy r.v. has : $p(x) = \frac{a/\pi}{x^2 + a^2}$, $-\infty < x < \infty$.

(a)
$$E(X) = \int_{-\infty}^{\infty} x\,p(x)\,dx = 0$$
since $p(x)$ is an even function.
$$E\left(X^2\right) = \int_{-\infty}^{\infty} x^2 p(x)\,dx = \frac{a}{\pi}\int_{-\infty}^{\infty} \frac{x^2}{x^2 + a^2}\,dx$$
Note that for large $x$, $\frac{x^2}{x^2 + a^2} \to 1$ (i.e. a non-zero value). Hence,
$$E\left(X^2\right) = \infty, \qquad \sigma^2 = \infty$$
(b)
$$\psi(jv) = E\left[e^{jvX}\right] = \int_{-\infty}^{\infty} \frac{a/\pi}{x^2 + a^2}\, e^{jvx}dx = \int_{-\infty}^{\infty} \frac{a/\pi}{(x + ja)(x - ja)}\, e^{jvx}dx$$
This integral can be evaluated by using the residue theorem in complex variable theory. Then,
for $v \geq 0$ :
$$\psi(jv) = 2\pi j \left[\frac{a/\pi}{x + ja}\, e^{jvx}\right]_{x = ja} = e^{-av}$$
For $v < 0$ :
$$\psi(jv) = -2\pi j \left[\frac{a/\pi}{x - ja}\, e^{jvx}\right]_{x = -ja} = e^{av}$$
Therefore :
$$\psi(jv) = e^{-a|v|}$$
Note : an alternative way to find the characteristic function is to use the Fourier transform
relationship between $p(x)$, $\psi(jv)$ and the Fourier pair :
$$e^{-b|t|} \leftrightarrow \frac{1}{\pi}\, \frac{c}{c^2 + f^2}, \qquad c = b/2\pi,\ f = 2\pi v$$
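The closed form $e^{-a|v|}$ can also be checked against the empirical characteristic function. A minimal sketch; the samples are drawn through the Cauchy inverse CDF $x = a\tan(\pi(U - \tfrac{1}{2}))$, a standard identity not part of the original solution :

```python
import numpy as np

# Empirical characteristic function of a Cauchy r.v. vs. e^{-a|v|}.
a = 2.0
rng = np.random.default_rng(5)
x = a * np.tan(np.pi * (rng.random(2_000_000) - 0.5))

for v in (-1.5, 0.5, 2.0):
    print(np.mean(np.exp(1j * v * x)).real, np.exp(-a * abs(v)))  # pairs agree
```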
Problem 2.10 :
(a) $Y = \frac{1}{n}\sum_{i=1}^{n} X_i$, $\psi_{X_i}(jv) = e^{-a|v|}$ :
$$\psi_Y(jv) = E\left[e^{j\frac{v}{n}\sum_{i=1}^{n} X_i}\right] = \prod_{i=1}^{n} E\left[e^{j\frac{v}{n}X_i}\right] = \prod_{i=1}^{n} \psi_{X_i}(jv/n) = \left(e^{-a|v|/n}\right)^n = e^{-a|v|}$$
(b) Since $\psi_Y(jv) = \psi_{X_i}(jv) \Rightarrow p_Y(y) = p_{X_i}(y) \Rightarrow p_Y(y) = \frac{a/\pi}{y^2 + a^2}$.
(c) As $n \to \infty$, $p_Y(y) = \frac{a/\pi}{y^2 + a^2}$, which is not Gaussian ; hence, the central limit theorem does
not hold. The reason is that the Cauchy distribution does not have a finite variance.
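A short simulation illustrates (a)-(c) : the sample mean of $n$ Cauchy variates has the same characteristic function $e^{-a|v|}$ as a single one, for any $n$. A minimal sketch (the choice $n = 50$ is arbitrary) :

```python
import numpy as np

# The mean of n i.i.d. Cauchy variates is again Cauchy with the same a.
a, n = 1.0, 50
rng = np.random.default_rng(6)
x = a * np.tan(np.pi * (rng.random((100_000, n)) - 0.5))
y = x.mean(axis=1)                       # 100k sample means

for v in (0.5, 1.0, 3.0):
    print(np.mean(np.exp(1j * v * y)).real, np.exp(-a * abs(v)))  # pairs agree
```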
Problem 2.11 :
We assume that $x(t)$, $y(t)$, $z(t)$ are real-valued stochastic processes. The treatment of
complex-valued processes is similar.

(a)
$$\phi_{zz}(\tau) = E\{[x(t + \tau) + y(t + \tau)][x(t) + y(t)]\} = \phi_{xx}(\tau) + \phi_{xy}(\tau) + \phi_{yx}(\tau) + \phi_{yy}(\tau)$$
(b) When $x(t)$, $y(t)$ are uncorrelated :
$$\phi_{xy}(\tau) = E[x(t + \tau)y(t)] = E[x(t + \tau)]E[y(t)] = m_x m_y$$
Similarly :
$$\phi_{yx}(\tau) = m_x m_y$$
Hence :
$$\phi_{zz}(\tau) = \phi_{xx}(\tau) + \phi_{yy}(\tau) + 2m_x m_y$$
(c) When $x(t)$, $y(t)$ are uncorrelated and have zero means :
$$\phi_{zz}(\tau) = \phi_{xx}(\tau) + \phi_{yy}(\tau)$$
Problem 2.12 :
The power spectral density of the random process $x(t)$ is :
$$\Phi_{xx}(f) = \int_{-\infty}^{\infty} \phi_{xx}(\tau)e^{-j2\pi f\tau}d\tau = N_0/2$$
The power spectral density at the output of the filter will be :
$$\Phi_{yy}(f) = \Phi_{xx}(f)|H(f)|^2 = \frac{N_0}{2}|H(f)|^2$$
Hence, the total power at the output of the filter will be :
$$\phi_{yy}(\tau = 0) = \int_{-\infty}^{\infty} \Phi_{yy}(f)df = \int_{-\infty}^{\infty} \frac{N_0}{2}|H(f)|^2 df = \frac{N_0}{2}(2B) = N_0 B$$
Problem 2.13 :
$$M_X = E\left[(\mathbf{X} - \mathbf{m}_x)(\mathbf{X} - \mathbf{m}_x)'\right], \qquad \mathbf{X} = \begin{bmatrix} X_1 \\ X_2 \\ X_3 \end{bmatrix}$$
where $\mathbf{m}_x$ is the corresponding vector of mean values. Then :
$$M_Y = E\left[(\mathbf{Y} - \mathbf{m}_y)(\mathbf{Y} - \mathbf{m}_y)'\right] = E\left[A(\mathbf{X} - \mathbf{m}_x)\left(A(\mathbf{X} - \mathbf{m}_x)\right)'\right]$$
$$= E\left[A(\mathbf{X} - \mathbf{m}_x)(\mathbf{X} - \mathbf{m}_x)'A'\right] = A\,E\left[(\mathbf{X} - \mathbf{m}_x)(\mathbf{X} - \mathbf{m}_x)'\right]A' = AM_XA'$$
Hence :
$$M_Y = \begin{bmatrix} \mu_{11} & 0 & \mu_{11} + \mu_{13} \\ 0 & 4\mu_{22} & 0 \\ \mu_{11} + \mu_{31} & 0 & \mu_{11} + \mu_{13} + \mu_{31} + \mu_{33} \end{bmatrix}$$
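The algebra $M_Y = AM_XA'$ is easy to confirm symbolically. The matrix $A$ below is an assumption reconstructed from the result (i.e. $Y_1 = X_1$, $Y_2 = 2X_2$, $Y_3 = X_1 + X_3$, which reproduces $M_Y$ when $\mu_{12} = \mu_{23} = 0$, as the zeros in $M_Y$ require) ; it is not quoted from the problem statement :

```python
import sympy as sp

# Symbolic check of M_Y = A * M_X * A'.
m11, m13, m22, m33 = sp.symbols('mu11 mu13 mu22 mu33')
MX = sp.Matrix([[m11, 0, m13],
                [0, m22, 0],
                [m13, 0, m33]])       # mu12 = mu23 = 0, mu31 = mu13 (symmetry)
A = sp.Matrix([[1, 0, 0],
               [0, 2, 0],
               [1, 0, 1]])            # assumed transformation (see lead-in)
print(A * MX * A.T)
# Matrix([[mu11, 0, mu11 + mu13], [0, 4*mu22, 0],
#         [mu11 + mu13, 0, mu11 + 2*mu13 + mu33]])
```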
Problem 2.14 :
$$Y(t) = X^2(t), \qquad \phi_{xx}(\tau) = E[x(t + \tau)x(t)]$$
$$\phi_{yy}(\tau) = E[y(t + \tau)y(t)] = E\left[x^2(t + \tau)x^2(t)\right]$$
Let $X_1 = X_2 = x(t)$, $X_3 = X_4 = x(t + \tau)$. Then, from problem 2.7 :
$$E(X_1X_2X_3X_4) = E(X_1X_2)E(X_3X_4) + E(X_1X_3)E(X_2X_4) + E(X_1X_4)E(X_2X_3)$$
Hence :
$$\phi_{yy}(\tau) = \phi_{xx}^2(0) + 2\phi_{xx}^2(\tau)$$
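A two-point Monte Carlo check of the final identity, treating $(x(t), x(t+\tau))$ as a zero-mean Gaussian pair with unit variance and an assumed correlation $\rho$ playing the role of $\phi_{xx}(\tau)$ (so $\phi_{xx}(0) = 1$) :

```python
import numpy as np

# Check phi_yy(tau) = phi_xx(0)^2 + 2*phi_xx(tau)^2 for a Gaussian pair.
rho = 0.6
M = np.array([[1.0, rho], [rho, 1.0]])
rng = np.random.default_rng(7)
x = rng.multivariate_normal([0.0, 0.0], M, size=2_000_000)

print(np.mean(x[:, 0]**2 * x[:, 1]**2))   # ~1.72
print(1.0 + 2 * rho**2)                   # 1.72
```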
Problem 2.15 :
$$p_R(r) = \frac{2}{\Gamma(m)}\left(\frac{m}{\Omega}\right)^m r^{2m-1} e^{-mr^2/\Omega}, \qquad X = \frac{1}{\sqrt{\Omega}}\,R$$
We know that : $p_X(x) = \frac{1}{1/\sqrt{\Omega}}\, p_R\!\left(\frac{x}{1/\sqrt{\Omega}}\right)$. Hence :
$$p_X(x) = \sqrt{\Omega}\,\frac{2}{\Gamma(m)}\left(\frac{m}{\Omega}\right)^m \left(\sqrt{\Omega}\,x\right)^{2m-1} e^{-m(x\sqrt{\Omega})^2/\Omega} = \frac{2}{\Gamma(m)}\, m^m x^{2m-1} e^{-mx^2}$$
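A numeric check that the resulting pdf is properly normalized and has unit second moment (since $\Omega = E(R^2)$ for the Nakagami-$m$ pdf, the normalized variable $X = R/\sqrt{\Omega}$ should satisfy $E(X^2) = 1$) ; the value of $m$ below is an arbitrary choice :

```python
import numpy as np
from math import gamma

# p_X(x) = (2/Gamma(m)) m^m x^{2m-1} e^{-m x^2}: check integral and E(X^2).
m = 1.8
x = np.linspace(1e-6, 10, 200_000)
p = 2 / gamma(m) * m**m * x**(2*m - 1) * np.exp(-m * x**2)
dx = x[1] - x[0]
print((p * dx).sum())          # ~1.0: the pdf integrates to one
print((x**2 * p * dx).sum())   # ~1.0: E(X^2) = E(R^2)/Omega = 1
```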
Problem 2.16 :
The transfer function of the filter is :
$$H(f) = \frac{1/j\omega C}{R + 1/j\omega C} = \frac{1}{j\omega RC + 1} = \frac{1}{j2\pi fRC + 1}$$
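A short sketch evaluating $|H(f)|$ for this RC low-pass response ; the component values below are illustrative assumptions, not from the problem statement :

```python
import numpy as np

# |H(f)| for H(f) = 1/(1 + j*2*pi*f*R*C); 3-dB frequency is 1/(2*pi*R*C).
R, C = 1e3, 1e-6                  # 1 kOhm, 1 uF -> 3-dB frequency ~159 Hz
f = np.array([0.0, 159.15, 1e3])
H = 1 / (1 + 1j * 2 * np.pi * f * R * C)
print(np.abs(H))                  # [1.0, ~0.707, ~0.157]
```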