# Intro to Measure Concentration

*Posted: 2017-10-07.*

This post introduces the concept of "tail bounds," or "measure concentration," and covers the basics of Markov's inequality, Chebyshev's inequality, and Chernoff-type bounds, along with how they are proven.

## Random, but not Unpredictable

The idea is simple: you've got a random variable $Y$, and you'd like to make an accurate prediction about $Y$. Sometimes, just by knowing some simple guarantees about $Y$, you can make a surprisingly accurate prediction; the likely outcomes "concentrate" in a relatively small region. Of course, the more stringent the guarantees, the better or "stronger" the predictions we can make.

## Markov's Inequality

Here's a pair of very simple guarantees: (1) nonnegativity, i.e. $Y \geq 0$; and (2) finite expectation, i.e. $\mathbb{E} Y < \infty$.
Intuitively, these suffice to tell us that $Y$ cannot be *too* large too much of the time; if it were, its expectation would be higher.
(Why do I need nonnegativity to make this argument?)
Markov's inequality formalizes this.

**Theorem (Markov's).** Let $Y$ be nonnegative; then for all $t > 0$,
$$ \Pr[Y \geq t] \leq \frac{\mathbb{E} Y}{t} . $$

*Proof.* Consider this random variable: $X = 0$ if $Y < t$, otherwise $X = t$.
Naturally, $X \leq Y$ always, so
\begin{align*}
\mathbb{E} Y
&\geq \mathbb{E} X \\
&= t \Pr[ Y \geq t],
\end{align*}
which rearranges to give the theorem.

Notice that Markov's inequality is tight (i.e. cannot be improved) if $Y$ actually has the distribution of $X$: it is zero with some probability and otherwise equals $t$, and we happen to be bounding the chance that $Y$ is at least $t$.

Most distributions don't look like this, so Markov's is usually very loose. A key idea below will be to transform a given distribution so it looks a bit more like $X$, apply Markov's inequality, then interpret what this says about the original distribution. For example, $\Pr[ Y \geq t ] = \Pr[ Y/t \geq 1 ]$, so it would suffice to consider this "scaled-down" version of $Y$.
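To make this concrete, here's a quick simulation (a Python sketch; the Exponential(1) distribution is my arbitrary choice of a nonnegative example) comparing the empirical tail of $Y$ with Markov's bound:

```python
import random

# Sketch: empirically check Markov's inequality Pr[Y >= t] <= E[Y]/t
# for a nonnegative random variable. Y ~ Exponential(1), so E[Y] = 1;
# the distribution is chosen only for illustration.
random.seed(0)
n = 100_000
samples = [random.expovariate(1.0) for _ in range(n)]
mean = sum(samples) / n

for t in [1.0, 2.0, 4.0]:
    empirical = sum(y >= t for y in samples) / n
    bound = mean / t
    print(f"t={t}: Pr[Y>=t] ~ {empirical:.4f}, Markov bound {bound:.4f}")
```

For the exponential the true tail is $e^{-t}$, so you'll see the bound hold but be quite loose, exactly as the discussion above suggests.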

## Chebyshev's Inequality

A lot of relatively simple but effective "tail bounds" come from a clever use of Markov's inequality. The first one is Chebyshev's. Suppose the "simple fact" we know about $Y$ is that its variance is finite. Then we know $Y$ can't be too far from its mean:

**Theorem (Chebyshev's).** Let $Y$ have finite variance; then for all $t > 0$, $\Pr[|Y - \mathbb{E}Y| \geq t] \leq \frac{\text{Var}(Y)}{t^2}$.

*Proof.* Let the random variable $Z = |Y - \mathbb{E}Y|$.
Then we have
\begin{align*}
\Pr[ Z \geq t ]
&= \Pr[ Z^2 \geq t^2 ] \\
&\leq \frac{\mathbb{E} Z^2}{t^2} & \text{(Markov's).}
\end{align*}
This says
\[ \Pr[ |Y - \mathbb{E}Y| \geq t] \leq \frac{\mathbb{E} (Y - \mathbb{E}Y)^2}{t^2} . \]
Of course, the expectation in the numerator is the variance of $Y$.
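Here too a quick simulation makes the bound (and its looseness) visible. This Python sketch uses a Uniform(0,1) variable as an arbitrary example, with true mean $1/2$ and variance $1/12$:

```python
import random

# Sketch: empirically check Chebyshev's inequality
# Pr[|Y - E Y| >= t] <= Var(Y)/t^2 for Y ~ Uniform(0, 1),
# whose true mean is 1/2 and true variance is 1/12.
random.seed(0)
n = 100_000
samples = [random.uniform(0, 1) for _ in range(n)]
mean = sum(samples) / n
var = sum((y - mean) ** 2 for y in samples) / n

for t in [0.2, 0.3, 0.4]:
    empirical = sum(abs(y - mean) >= t for y in samples) / n
    bound = var / t ** 2
    print(f"t={t}: Pr[|Y-EY|>=t] ~ {empirical:.4f}, Chebyshev bound {bound:.4f}")
```

Note that for small $t$ the bound exceeds 1 and says nothing at all; Chebyshev only bites once $t$ is a couple of standard deviations out.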

Naturally, we can generalize this approach to higher moments. Notice that the assumption that the fourth central moment is finite, for example, is stronger than the assumption that the mean is finite or that the variance is. The better bounds will only apply to distributions satisfying stronger assumptions.

## "Chernoff" Bounds

Now we consider a much stronger assumption, namely that $\mathbb{E}e^Y$ is finite. (This is very similar to assuming that all moments exist.) More usefully, we would like to suppose that $\mathbb{E}e^{\theta Y}$ is finite for all $\theta \geq 0$ or all $\theta$ in a given range.

Note that $\mathbb{E} e^Y = \int e^y f(y) dy$ where $f$ is the density function of the distribution (if it exists), so intuitively, for this to be finite, the density must drop at least exponentially in the tails. See also moment generating functions.

In this case, informally, we have
\begin{align*}
\Pr[ Y \geq t ]
&= \Pr[ e^{\theta Y} \geq e^{\theta t} ] \\
&\leq \frac{\mathbb{E} e^{\theta Y}} {e^{\theta t}}
\end{align*}
using Markov's inequality. (Why couldn't $\theta$ be negative?) This is the basis of what computer scientists call the "Chernoff bound" approach, and it often gives a very tight bound if we can control $\mathbb{E} e^{\theta Y}$.

To see why the "loose" Markov's can be so tight when applied in this way, consider the variable $Z = e^{\theta Y} / e^{\theta t}$. $Z$ is essentially zero for all small values of $Y$; then, for $Y$ in the neighborhood of $t$, $Z$ suddenly becomes noticeable. If $Y$ has tails that drop off quickly after $t$, then $Z$'s distribution looks just like the kind for which Markov's actually gives good bounds.
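As a small worked example of this recipe (my choice of example, not one from the sections below): for $Y \sim$ Exponential(1), the true tail $\Pr[Y \geq t] = e^{-t}$ is known exactly, and $\mathbb{E} e^{\theta Y} = 1/(1-\theta)$ for $0 \leq \theta < 1$. A little calculus shows the bound $e^{-\theta t}/(1-\theta)$ is minimized at $\theta = 1 - 1/t$ for $t > 1$, giving $\Pr[Y \geq t] \leq t e^{1-t}$: off by only a factor of $et$.

```python
import math

# Sketch: the Chernoff recipe on Y ~ Exponential(1), where the true
# tail Pr[Y >= t] = e^{-t} is known exactly. The MGF is
# E[e^{theta Y}] = 1/(1 - theta) for theta in [0, 1), so the bound is
# e^{-theta t}/(1 - theta), optimized at theta = 1 - 1/t for t > 1.
def chernoff_bound(t, theta):
    return math.exp(-theta * t) / (1.0 - theta)

for t in [2.0, 5.0, 10.0]:
    theta = 1.0 - 1.0 / t               # optimal choice for this MGF
    bound = chernoff_bound(t, theta)    # equals t * e^{1 - t}
    true_tail = math.exp(-t)
    print(f"t={t}: true tail {true_tail:.2e}, Chernoff bound {bound:.2e}")
```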

[Side note ahead of time: our proofs will use some useful bounds: $1+x \leq e^x$ (for all $x$) and $x - \frac{x^2}{2} \leq \ln(1+x)$ (for all $x \geq 0$).]

**Lemma.** Let $X$ be Bernoulli$(p)$; then for any $\theta > 0$, $\mathbb{E} e^{\theta X} \leq e^{p(e^{\theta}-1)}$.

*Proof.*
\begin{align*}
\mathbb{E} e^{\theta X}
&= p e^{\theta} + (1-p) \\
&= 1 + p(e^{\theta} - 1) \\
&\leq e^{p(e^{\theta} - 1)} .
\end{align*}
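The lemma's inequality (the last step is just $1 + x \leq e^x$ with $x = p(e^{\theta}-1)$) is easy to spot-check numerically; this Python sketch sweeps a small grid of $p$ and $\theta$:

```python
import math

# Sketch: spot-check the lemma E[e^{theta X}] <= e^{p(e^theta - 1)}
# for X ~ Bernoulli(p), over a grid of p and theta values.
for p in [0.1, 0.3, 0.5, 0.9]:
    for theta in [0.1, 0.5, 1.0, 2.0]:
        mgf = p * math.exp(theta) + (1 - p)          # exact E[e^{theta X}]
        bound = math.exp(p * (math.exp(theta) - 1))  # lemma's bound
        assert mgf <= bound
```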

A main useful feature of this approach is that if $Y = X_1 + X_2$, then $e^Y = e^{X_1}e^{X_2}$, and if $X_1$ and $X_2$ are furthermore *independent*, then $\mathbb{E}e^Y = \left(\mathbb{E}e^{X_1}\right) \left(\mathbb{E}e^{X_2}\right)$.

**Theorem (example "Chernoff" bound for sums of Bernoullis).** Let $Y = \sum_{i=1}^n X_i$ with each $X_i$ distributed Bernoulli$(p_i)$ and all mutually independent.
Let $\mu = \sum_{i=1}^n p_i$.
Then for $t \geq \mu$,
$$ \Pr[ Y \geq t ] \leq e^{-\frac{(t-\mu)^2}{2t}} . $$

*Proof.*
\begin{align*}
\Pr[ Y \geq t ]
&= \Pr[ e^{\theta Y} \geq e^{\theta t} ] \\
&\leq \frac{\mathbb{E} e^{\theta Y}}{e^{\theta t}} & \text{(Markov's)} \\
&= \frac{1}{e^{\theta t}} \prod_{i=1}^n \mathbb{E} e^{\theta X_i} \\
&\leq \frac{1}{e^{\theta t}} \prod_{i=1}^n e^{p_i(e^{\theta}-1)} & \text{(Lemma)} \\
&= \frac{1}{e^{\theta t}} e^{\sum_i p_i(e^{\theta}-1)} \\
&= \frac{1}{e^{\theta t}} e^{\mu (e^{\theta}-1)} .
\end{align*}
Note that this holds for all $\theta > 0$, so we can choose any such $\theta$ and get a valid bound.
In particular $\theta = \ln(1+c)$ is convenient, for $c$ to be chosen momentarily, so we get (using that for positive $c$, $\ln(1+c) \geq c - \frac{c^2}{2}$)
\begin{align*}
\Pr[ Y \geq t ]
&\leq e^{\mu c - t\ln(1+c)} \\
&\leq e^{\mu c - t(c - c^2/2)} \\
&= e^{-c(t-\mu) + tc^2/2} .
\end{align*}
Picking $c = (t-\mu)/t$ (by the magic of guess-and-check),
\begin{align*}
\Pr[ Y \geq t ]
&\leq e^{-(t-\mu)^2/2t} .
\end{align*}

For a slightly more intuitive bound, for example, if we assume $t \leq 2\mu$, then we can simplify to \[ \Pr[ Y \geq t ] \leq e^{-(t-\mu)^2 / 4\mu} \] (however note better bounds are available). The point I want to make is that this becomes useful for $t \geq \mu + 2\sqrt{\mu}$, and then rapidly becomes much more useful, for example, for $t = \mu + 2k\sqrt{\mu}$, the probability dwindles to $e^{-k^2}$. This should make sense because the standard deviation of $Y$ is roughly $\sqrt{\mu}$. (This tail behavior is a good fact to internalize about Binomials.)
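Finally, here's a simulation (a Python sketch with $n = 100$ and $p = 1/2$, both arbitrary choices) checking the theorem's bound at $t = \mu + 2k\sqrt{\mu}$:

```python
import math
import random

# Sketch: compare the theorem's bound e^{-(t-mu)^2/(2t)} with the
# empirical tail of Y = sum of n independent Bernoulli(p) variables.
random.seed(0)
n, p, trials = 100, 0.5, 20_000
mu = n * p
counts = [sum(random.random() < p for _ in range(n)) for _ in range(trials)]

for k in [1, 2, 3]:
    t = mu + 2 * k * math.sqrt(mu)      # t = mu + 2k*sqrt(mu)
    empirical = sum(y >= t for y in counts) / trials
    bound = math.exp(-((t - mu) ** 2) / (2 * t))
    print(f"k={k}: Pr[Y>=t] ~ {empirical:.2e}, Chernoff bound {bound:.2e}")
```

Even at $k = 2$ or $3$ the empirical tail is essentially zero, while Markov's or Chebyshev's bound at the same $t$ would still be a small constant; that gap is the whole point of the exponential-moment method.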

You can find more Chernoff bounds on their Wikipedia page. That's it for this post, but measure concentration is a big topic with room for many future posts -- see the books in the references in the meantime.

## References

- Michael Mitzenmacher and Eli Upfal (2005). *Probability and Computing: Randomized Algorithms and Probabilistic Analysis*.
- Stéphane Boucheron, Gábor Lugosi, and Pascal Massart (2013). *Concentration Inequalities: A Nonasymptotic Theory of Independence*.
