$\def\NN{\mathbb{N}}$$\def\RR{\mathbb{R}}$$\def\eps{\epsilon}$$\def\calT{\mathcal{T}}$$\def\calP{\mathcal{P}}$$\def\calL{\mathcal{L}}$$\def\calR{\mathcal{R}}$$\def\calC{\mathcal{C}}$$\def\calF{\mathcal{F}}$$\def\calB{\mathcal{B}}$$\def\calS{\mathcal{S}}$$\def\calA{\mathcal{A}}$$\def\calG{\mathcal{G}}$$\newcommand{\inner}[2]{\langle#1, #2\rangle}$$\newcommand{\abs}[1]{\left\vert#1\right\vert}$$\newcommand{\norm}[1]{\left\Vert#1\right\Vert}$$\newcommand{\paren}[1]{\left(#1\right)}$$\newcommand{\sqbracket}[1]{\left[#1\right]}$$\def\var{\text{Var}}$$\def\cov{\text{Cov}}$$\newcommand{\pd}[2]{\frac{\partial #1}{\partial #2}}$$\newcommand{\doublepd}[3]{\frac{\partial^2 #1}{\partial #2 \partial #3}}$
Definition. Let $(A_n)_{n=1}^\infty$ be a sequence of sets. Define
$$\limsup_{n \to \infty} A_n := \bigcap_k \bigcup_{n \ge k} A_n$$
and
$$\liminf_{n \to \infty} A_n := \bigcup_k \bigcap_{n \ge k} A_n.$$
Proposition. For a sequence $(A_n)_{n=1}^\infty$ of sets,
$$\limsup A_n = \{\omega : \omega \in A_n \ \text{for infinitely many} \ n\}$$
and
$$\liminf A_n = \{\omega : \omega \in A_n \ \text{for all but finitely many} \ n\}.$$
Proof.
$$\begin{align*} \omega \in \limsup A_n &\iff \omega \in \bigcup_{n \ge k} A_n \ \text{for all} \ k \\ &\iff \text{For each} \ k, \ \text{there exists} \ n_k \ge k \ \text{such that} \ \omega \in A_{n_k}\\ &\iff \omega \in A_n \ \text{for infinitely many} \ n.\\\\ \omega \in \liminf A_n &\iff \omega \in \bigcap_{n \ge k} A_n \ \text{for some} \ k\\ &\iff \text{For some} \ k, \ \omega \in A_n \ \text{for all} \ n \ge k\\ &\iff \omega \in A_n \ \text{for all but finitely many} \ n. \end{align*}$$
$\square$
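A standard example illustrating how the two limits can differ: let $A_n := [0, 1]$ for odd $n$ and $A_n := [1, 2]$ for even $n$. Every point of $[0, 2]$ belongs to $A_n$ for infinitely many $n$, while only the point $1$ belongs to all but finitely many $A_n$, so
$$\limsup_{n \to \infty} A_n = [0, 2], \qquad \liminf_{n \to \infty} A_n = \{1\}.$$
In particular, $\liminf A_n \subseteq \limsup A_n$ always holds, and the inclusion can be strict.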
Theorem. Let $(A_n)_{n=1}^\infty$ be a sequence of events.
(a) (The first Borel-Cantelli lemma.) If $\sum_n P(A_n) < \infty$, then $P(\limsup A_n) = 0$.
(b) (The second Borel-Cantelli lemma.) If $\sum_n P(A_n) = \infty$ and the events $A_n$ are pairwise independent, then $P(\limsup A_n) = 1$.
Proof.
(a) Let $Z := \sum_{n=1}^\infty I_{A_n}$. By the MCT, $EZ = \sum_n P(A_n) < \infty$, so $P(Z = \infty) = 0$. Since $\limsup A_n = \{Z = \infty\}$, the claim follows.
(b) Since $\sum_n P(A_n) = \infty$, we have $P(A_m) > 0$ for some $m$; discarding the terms before $A_m$ changes neither the hypothesis nor $\limsup A_n$, so we may assume $P(A_1) > 0$. Let
$$Z_n := \sum_{i=1}^n I_{A_i}, \quad J_n := \frac{Z_n}{EZ_n}.$$
Note that $EJ_n = 1$ for all $n$. Since the $A_n$'s are pairwise independent and $EZ_n = \sum_{i=1}^n P(A_i) \to \infty$,
$$\var(J_n) = \frac{\sum_{i=1}^n \var(I_{A_i}) }{(EZ_n)^2} =\frac{\sum_{i=1}^n P(A_i)(1-P(A_i))}{(EZ_n)^2} \le \frac{1}{EZ_n} \xrightarrow{n \to \infty} 0.$$
By Chebyshev's inequality, for each $\eps > 0$
$$P(\abs{J_n - 1} \ge \eps) \le \frac{\var(J_n)}{\eps^2} \xrightarrow{n \to \infty} 0$$
so $J_n \xrightarrow{p} 1$. Thus there exists a subsequence $(J_{n_k})_{k=1}^\infty$ converging to $1$ a.e. (see the theorem at the end of this section). Since $Z_{n_k} = J_{n_k} \, EZ_{n_k}$ and $EZ_{n_k} \to \infty$, we get $Z_{n_k} \to \infty$ a.e. As $(Z_n)_{n=1}^\infty$ is nondecreasing at each point, $Z_n \to \infty$ a.e. Since $\limsup A_n = \{\lim_n Z_n = \infty\}$, this gives $P(\limsup A_n) = 1$. $\square$
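As a standard illustration of how the two lemmas complement each other, suppose the events $A_n$ are independent with $P(A_n) = n^{-2}$. Then $\sum_n P(A_n) < \infty$, so by (a) almost every $\omega$ lies in only finitely many $A_n$. If instead $P(A_n) = n^{-1}$, then $\sum_n P(A_n) = \infty$, so by (b) almost every $\omega$ lies in infinitely many $A_n$, even though $P(A_n) \to 0$ in both cases.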
Proposition. Let $(X_n)_{n=1}^\infty$ be a sequence of random variables.
(a) If $\sum_n P(\abs{X_n} > \eps) < \infty$ for each $\eps > 0$, then $P(\lim X_n = 0) = 1$.
(b) If $(X_n)_{n=1}^\infty$ is pairwise independent and $P(\lim X_n = 0) = 1$, then $\sum_n P(\abs{X_n} > \eps) < \infty$ for each $\eps > 0$.
Proof.
(a) Fix $\eps > 0$. Let $A_n := \{\abs{X_n} > \eps\}$. By the first Borel-Cantelli lemma, $P(\limsup A_n) = 0$. Let $B_\eps := (\limsup A_n)^C$. Then $\omega \in B_\eps$ iff there exists some $N$ such that $\abs{X_n(\omega)} \le \eps$ for all $n \ge N$. Now let $B := \bigcap_{r \ge 1} B_{1/r}$. Then $B = \{\lim X_n = 0\}$. Also $P(B^C) \le \sum_{r \ge 1} P(B_{1/r}^C) = 0$ so $P(B) = 1$.
(b) Suppose $\sum_n P(\abs{X_n} > \eps_0) = \infty$ for some $\eps_0 > 0$. For each $n$, let $A_n := \{\abs{X_n} > \eps_0\}$. Then the $A_n$'s are pairwise independent, so by the second Borel-Cantelli lemma, $P(\limsup A_n) = 1$. For $\omega \in \limsup A_n$ we have $\abs{X_n(\omega)} > \eps_0$ for infinitely many $n$, hence $\limsup_n \abs{X_n(\omega)} \ge \eps_0$. This contradicts $P(\lim X_n = 0) = 1$. $\square$
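For a concrete instance of this proposition, let $(X_n)_{n=1}^\infty$ be independent with $P(X_n = 1) = p_n$ and $P(X_n = 0) = 1 - p_n$. For $0 < \eps < 1$ we have $P(\abs{X_n} > \eps) = p_n$, so $X_n \to 0$ a.e. if and only if $\sum_n p_n < \infty$. Taking $p_n = 1/n$ gives a sequence with $P(X_n = 0) \to 1$ that nevertheless fails to converge to $0$ a.e.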
Definition. Let $(X_n)_{n=1}^\infty$ be a sequence of random variables. The tail $\sigma$-algebra of $(X_n)_{n=1}^\infty$ is defined as
$$\calT := \bigcap_{n=1}^\infty \sigma(\{X_k\}_{k \ge n}).$$
An event in $\calT$ is called a tail event. Any $\calT$-measurable random variable is called a tail random variable.
For example, $\{\limsup X_n \le c\}$ and $\{\lim X_n = c\}$ are tail events.
Theorem. (Kolmogorov's 0-1 law). Let $(X_n)_{n=1}^\infty$ be a sequence of independent random variables and $\calT$ be its tail $\sigma$-algebra. Then for each $A \in \calT$, $P(A)=0$ or $P(A)=1$.
Proof. For each $n$, let
$$\calF_n := \sigma(\{X_1, \ldots, X_n\}), \quad \calT_n := \sigma(\{X_{n+1}, X_{n+2}, \ldots\}).$$
Then for each $n$, $\calF_n$ is independent of $\calT_n$. The tail $\sigma$-algebra $\calT$ equals $\bigcap_n \calT_n$, so each $\calF_n$ is independent of $\calT$. Let $\calA := \bigcup_n \calF_n$; then $\calA$ is independent of $\calT$. One can check that $\calA$ is a $\pi$-system, and since independence of $\pi$-systems passes to the generated $\sigma$-algebras (by the Dynkin $\pi$-$\lambda$ theorem), $\sigma(\calA)$ is independent of $\calT$. Also,
$$\calT \subseteq \sigma(\{X_n\}_{n \ge 1}) = \sigma(\calA)$$
so $\calT$ is independent of itself. Therefore, for any $A \in \calT$, $P(A \cap A) = P(A)P(A)$ so $P(A) = 0$ or $P(A)=1$. $\square$
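A typical application: for an independent sequence $(X_n)_{n=1}^\infty$, the event $\{\sum_n X_n \ \text{converges}\}$ is a tail event, since convergence of the series does not depend on the values of $X_1, \ldots, X_m$ for any fixed $m$. By the theorem, a series of independent random variables therefore converges either with probability $0$ or with probability $1$.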
Definition. An $\calF$-measurable mapping $X : \Omega \to \overline{\RR}$ is called an extended real-valued random variable or $\overline{\RR}$-valued random variable.
Corollary. Let $\calT$ be the tail $\sigma$-algebra of a sequence $(X_n)_{n=1}^\infty$ of independent random variables. Let $X$ be a $\calT$-measurable $\overline{\RR}$-valued random variable. Then there exists $c \in \overline{\RR}$ such that $P(X = c) = 1$.
Proof. If $P(X \le x) = 0$ for all $x \in \RR$, then $P(X = \infty) = \lim_{n \to \infty} P(X > n) = 1$, so $c = \infty$ works. Now suppose $P(X \le x) \ne 0$ for some $x \in \RR$. Define
$$B := \{x \in \RR : P(X \le x) \ne 0 \}$$
which is nonempty. For each $x \in B$, $\{X \le x\}$ is a tail event so by Kolmogorov's 0-1 law, $P(X \le x) = 1$. Now let $c := \inf B$. There exists a sequence $(x_n)_{n =1}^\infty$ in $B$ such that $x_n \searrow c$; so $P(X \le c) = \lim_{n \to \infty} P(X \le x_n) = 1$. $\square$
Definition. $(X_n)_{n=1}^\infty$ converges to $X$ in probability, denoted $X_n \xrightarrow{p} X$, if for each $\eps > 0$,
$$\lim_{n \to \infty} P(\abs{X_n - X} > \eps) = 0.$$
Theorem. If $X_n \xrightarrow{p} X$, then there exists a subsequence $(X_{n_k})_{k=1}^\infty$ such that $X_{n_k} \to X$ a.e.
Proof. Since $P(\abs{X_n - X} > 2^{-k}) \to 0$ as $n \to \infty$ for each fixed $k$, we can take an increasing sequence $(n_k)_{k=1}^\infty$ such that
$$P(\abs{X_{n_k} - X} > 2^{-k}) \le 2^{-k}$$
for each $k$. Let $A_k := \{\abs{X_{n_k} - X} > 2^{-k}\}$. Let $Z := \sum_k I_{A_k}$. By the MCT, $EZ = \sum_{k} P(A_k) < \infty$ so $Z < \infty$ a.e. If $Z(\omega) < \infty$, then $\omega \in A_k$ for at most finitely many $k$, so $\abs{X_{n_k}(\omega) - X(\omega)} \le 2^{-k}$ for sufficiently large $k$. Therefore, $X_{n_k} \to X$ a.e. $\square$
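The subsequence cannot be dispensed with: a standard example (the "typewriter" sequence) on $\Omega = [0, 1]$ with Lebesgue measure writes each $n$ as $n = 2^k + j$ with $0 \le j < 2^k$ and sets $X_n := I_{[j 2^{-k}, (j+1) 2^{-k}]}$ and $X := 0$. Then $P(\abs{X_n - X} > \eps) \le 2^{-k} \to 0$, so $X_n \xrightarrow{p} 0$, yet for every $\omega$ we have $X_n(\omega) = 1$ for infinitely many $n$, so $X_n$ converges at no point. The subsequence $X_{2^k} = I_{[0, 2^{-k}]}$ does converge to $0$ a.e., as the theorem guarantees.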