# The Tiger's Stripes

A blog on fun research and tools in computer science and game theory.

Author: Bo Waggoner

# Divergences and Value of Information

Posted: 2016-10-07.

This is a follow-up to generalized entropies and the value of information, where we discussed how value of information connects to generalized entropies. Here, we'll connect both to Bregman divergences, filling in a third side of the triangle.

The goal is to complete a set of three equivalent but intuitively distinct answers to the question:

How do we measure the amount of information conveyed by a signal (random variable) $A$?

Our answer, for a given decisionmaker, is three equivalent ways to measure amount or value of information.

Informal equivalence. A decisionmaker can be captured by a value-of-information function, a divergence, and an entropy function such that:

Very valuable information (according to your utility function) moves your beliefs a large distance (according to your divergence measure) and also reduces a lot of uncertainty (according to your entropy measure).

## Change Your Mind

Imagine for a moment you're a perfect Bayesian reasoner. ("Imagine? what does he mean imagine?")

You have a belief $p$ about some fact or event, e.g. whether My Favorite Sports Team wins the Big-O Derby Cup this year. Then you get a piece of new information -- maybe their star player injured her lateralus. How much does your belief change?

We can measure such a change with a distance function $D(q,p)$ from your posterior belief $q$ to your prior belief $p$. For example, you might use KL-divergence $KL(q,p) = \sum_x q(x) \log \frac{q(x)}{p(x)}$.

A nice generalization of KL-divergence is the Bregman divergence corresponding to a convex function $G$, namely $D_G(q,p) := G(q) - LA(p,q)$ where $LA(p,q)$ is not a city but a linear approximation: $LA(p,q) = G(p) + \langle G'(p), q-p \rangle .$ The KL-divergence is a special case corresponding to the function $-H(p) = \sum_x p(x) \log p(x)$.
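To make the definition concrete, here's a small numerical sketch of the Bregman divergence, checking that plugging in the convex function $G(p) = \sum_x p(x) \log p(x)$ recovers the KL-divergence. (The specific distributions are made up for illustration.)

```python
import numpy as np

def bregman(G, gradG, q, p):
    """Bregman divergence D_G(q, p) = G(q) - G(p) - <G'(p), q - p>."""
    return G(q) - G(p) - np.dot(gradG(p), q - p)

# Negative Shannon entropy (natural log), a convex function of p.
negH = lambda p: np.sum(p * np.log(p))
grad_negH = lambda p: np.log(p) + 1.0

def kl(q, p):
    """KL-divergence sum_x q(x) log(q(x)/p(x))."""
    return np.sum(q * np.log(q / p))

q = np.array([0.7, 0.3])  # posterior belief
p = np.array([0.4, 0.6])  # prior belief
print(bregman(negH, grad_negH, q, p), kl(q, p))  # the two values agree
```

The agreement is exact: expanding the linear approximation term, the $\sum_x p(x) \log p(x)$ and constant pieces cancel because both $q$ and $p$ sum to one, leaving $\sum_x q(x) \log \frac{q(x)}{p(x)}$.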

(There are two reasons I'm ordering the arguments as $D_G(q,p)$ rather than $D_G(p,q)$. First, it seems that $D_G(a,b)$ measures in a sense the difference between a "correct" $a$ and an "incorrect" approximation $b$. So it makes more sense to stand at the posterior belief $q$, look back at the prior, and ask how "wrong" it was or how far it "diverged". Second, it meshes with the story I'm about to tell.)

## Recap

Recall from generalized entropies and the value of information that a decision problem consists of a utility function $u$ and a random variable $E$. The decisionmaker picks a decision $d$ and gets expected utility $\mathbb{E} u(d, e)$ where $e$ is an outcome of $E$. For instance, $E$ could be whether or not MFS Team wins the Cup this year, and the decision could be what bet to place on them.

The first thing we said was that there exists a convex function $G$ such that, for any distribution $p$ on $E$, the expected utility for acting optimally in the face of that distribution is $G(p)$. We can overload notation by writing $G(E)$ for this same quantity.

Then we supposed that the decisionmaker first observes a signal $A$ as a random variable that is correlated with $E$ (such as learning whether or not the star player is injured). We wrote $G(E|A)$ for the expected utility for deciding given access to $A$. So the marginal value of $A$ was measured by $G(E|A) - G(E)$, where $G(E|A) = \mathbb{E}_a G(p_a)$.
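Here's a toy instance of this setup: a made-up betting problem with an arbitrary joint distribution over the signal and the event, just to show how $G(E)$, $G(E|A)$, and the marginal value are computed.

```python
import numpy as np

# Toy decision problem: bet (d=1) or don't (d=0) on a win (e=1).
# Payoffs (made up): u(bet, win) = 1, u(bet, lose) = -1, u(no bet, .) = 0.
def G(p_win):
    """Expected utility of acting optimally under belief p_win: max over decisions."""
    return max(0.0, 2 * p_win - 1)  # convex (piecewise linear) in p_win

# Arbitrary illustrative joint P(A = a, E = e):
joint = np.array([[0.1, 0.5],   # A = healthy: (lose, win)
                  [0.3, 0.1]])  # A = injured: (lose, win)

p_win = joint[:, 1].sum()              # prior P(E = win)
p_a = joint[:, 1] / joint.sum(axis=1)  # posteriors P(win | A = a)

G_E = G(p_win)                                           # decide with prior only
G_E_given_A = np.dot(joint.sum(axis=1), [G(x) for x in p_a])  # E_a G(p_a)
print(G_E_given_A - G_E)  # marginal value of A; always >= 0 since G is convex
```

The marginal value comes out nonnegative, as it must: $G(E|A) = \mathbb{E}_a G(p_a) \geq G(\mathbb{E}_a p_a) = G(E)$ by Jensen's inequality applied to the convex $G$.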

Meanwhile, we said that one can use a (generalized) entropy function $F$ to measure the information that $A$ conveys about $E$, which was $F(E) - F(E|A)$. Since I defined generalized entropy functions to simply be concave functions, we have a correspondence between entropies and decision problems via $F = -G$.

## Measuring Value By Change

Ok, here's the point.

Fact. Consider any Bregman divergence $D_G$ for convex $G$. Let $p$ be the prior on $E$, $A$ a signal, and $p_a$ the posterior on $E$ given $A=a$.
Then the expected change in posterior belief $\mathbb{E}_{a} D_G(p_a, p)$ is equal to the marginal value $G(E|A) - G(E)$, also equal to the reduction in entropy $F(E) - F(E|A)$ for generalized entropy $F = -G$.

Proof. Exercise!
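(If you'd rather see it than prove it: here's a numerical check of the Fact, using the convex function $G(p) = \sum_x p(x)^2$, whose Bregman divergence is the squared Euclidean distance, and an arbitrary made-up joint distribution.)

```python
import numpy as np

# Quadratic ("Brier"-style) convex function and its Bregman divergence.
G = lambda p: np.sum(p ** 2)
D = lambda q, p: G(q) - G(p) - np.dot(2 * p, q - p)  # gradient of G is 2p

joint = np.array([[0.1, 0.5],   # arbitrary joint P(A = a, E = e)
                  [0.3, 0.1]])
p_A = joint.sum(axis=1)          # marginal over signals a
p = joint.sum(axis=0)            # prior on E
posts = joint / p_A[:, None]     # posteriors p_a on E

lhs = np.dot(p_A, [D(pa, p) for pa in posts])        # E_a D_G(p_a, p)
rhs = np.dot(p_A, [G(pa) for pa in posts]) - G(p)    # G(E|A) - G(E)
print(np.isclose(lhs, rhs))  # True
```

The key step hiding in this check is that $\mathbb{E}_a p_a = p$ (posteriors average back to the prior), so the linear-approximation term of the divergence vanishes in expectation.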

To me, this is interesting/important because it paints the same picture as the previous blog post -- amount of information revealed by a signal -- in a very different light. Hopefully it can lead to actually interesting and useful applications as well, but we'll have to see!
