Let X_1, X_2, \dots be random variables which converge to X in probability and which are uniformly integrable.
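
For concreteness, the uniform integrability condition I am using is the standard one:

\[
\sup_n \mathbb{E}\big[ |X_n| \, \mathbf{1}_{\{|X_n| > K\}} \big] \longrightarrow 0 \quad \text{as } K \to \infty.
\]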

Intuitively, convergence in probability says that the X_n converge nicely ‘most of the time’, and uniform integrability says that where they don’t converge nicely, they don’t misbehave too badly. This is the intuitive reason why these two conditions suffice for convergence in L^1, and below is what I hope is an intuitive proof (though it differs wildly from the one in the official notes).

Let \epsilon >0.

Firstly, fix \delta >0 such that for every n, \int_B |X_n - X| \leq \int_B |X_n| + \int_B |X| < \frac{\epsilon}{2} whenever \mathbb{P}(B) < \delta (using uniform integrability).
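
To spell out the uniform integrability step (and this is where, as the edit below notes, X itself also needs to belong to the uniformly integrable family): for any \eta > 0 there is a \delta > 0 with

\[
\sup_n \int_B |X_n| \, d\mathbb{P} < \eta \qquad \text{whenever } \mathbb{P}(B) < \delta,
\]

and applying this with \eta = \frac{\epsilon}{4} to the family \{X_n\} and, separately, to X gives the bound above.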

Secondly, fix N such that whenever n \geq N, \mathbb{P}(|X_n - X| \geq \frac{\epsilon}{2}) < \delta (possible by convergence in probability).

We deduce that for all n \geq N, \mathbb{E}|X_n - X| = \int_{\{ |X_n - X|<\frac{\epsilon}{2} \}} |X_n - X| + \int_{\{ |X_n - X| \geq \frac{\epsilon}{2} \}} |X_n - X| < \epsilon.
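
Writing out the justification for each piece (this is just the line above, restated with the choices of \delta and N made explicit):

\begin{align*}
\mathbb{E}|X_n - X| &= \int_{\{ |X_n - X| < \frac{\epsilon}{2} \}} |X_n - X| \, d\mathbb{P} + \int_{\{ |X_n - X| \geq \frac{\epsilon}{2} \}} |X_n - X| \, d\mathbb{P} \\
&\leq \frac{\epsilon}{2} \, \mathbb{P}\Big( |X_n - X| < \tfrac{\epsilon}{2} \Big) + \int_{\{ |X_n - X| \geq \frac{\epsilon}{2} \}} |X_n - X| \, d\mathbb{P} \\
&< \frac{\epsilon}{2} + \frac{\epsilon}{2} = \epsilon,
\end{align*}

where the second integral is less than \frac{\epsilon}{2} because \mathbb{P}(|X_n - X| \geq \frac{\epsilon}{2}) < \delta for n \geq N, and \delta was chosen in the first step.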

So we are done, for exactly the reasons we expected. In both official sets of course notes I’ve been using, the proof takes an entirely different and, to me, much more obscure route: it picks an a.s. convergent subsequence and uses Fatou’s lemma together with another lemma about uniform integrability, which seems to me to miss the point. But if there is a flaw in my proof above, I’d be glad to know about it.

EDIT: Thanks to Spencer for pointing out the flaw: I assumed that X is itself uniformly integrable. Fortunately, this can be proved relatively cleanly, and essentially the proof I want is documented well in Martin Orr’s notes, also posted below. Thanks for the feedback, guys.
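
For completeness, here is a sketch of one standard way to supply that missing step (it may or may not be exactly the argument in the linked notes): since X_n \to X in probability, some subsequence X_{n_k} \to X almost surely, so Fatou’s lemma gives

\[
\mathbb{E}|X| \leq \liminf_{k} \mathbb{E}|X_{n_k}| \leq \sup_n \mathbb{E}|X_n| < \infty,
\]

the last bound holding because a uniformly integrable family is bounded in L^1. A single integrable random variable is uniformly integrable, and the union of two uniformly integrable families is uniformly integrable, so \{X_n : n \geq 1\} \cup \{X\} is uniformly integrable, which is exactly what the first step of the proof requires.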
