<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0">
<channel>
<title>The Tiger's Stripes</title>
<link>https://www.bowaggoner.com/blog/</link>
<description>A technical blog on math, computer science, and game theory.</description>
<image>
  <url>https://www.bowaggoner.com/blog/images/morphog.jpg</url>
  <title>The Tiger's Stripes</title>
  <link>https://www.bowaggoner.com/blog/</link>
</image>
<language>en</language>
<pubDate>Sat, 01 Aug 2020 00:00:00 GMT</pubDate>
<item>
  <title>Formulations of Max Flow</title>
  <link>https://www.bowaggoner.com/blog/2020/08-01-formulations-of-max-flow/index.html</link>
  <guid>https://www.bowaggoner.com/blog/2020/08-01-formulations-of-max-flow/index.html</guid>
  <description>There are a couple of ways to teach max flow and the Ford-Fulkerson algorithmic framework.
While the high-level ideas are similar, the differences can be confusing if you haven't seen both.
They're also interesting from a philosophical, pedagogical, and practical perspective!
This post briefly reviews the approaches along with a variant of my own.</description>
  <pubDate>Sat, 01 Aug 2020 00:00:00 GMT</pubDate>
</item>
<item>
  <title>Sub-Gamma Variables and Concentration</title>
  <link>https://www.bowaggoner.com/blog/2019/02-03-sub-gamma-concentration/index.html</link>
  <guid>https://www.bowaggoner.com/blog/2019/02-03-sub-gamma-concentration/index.html</guid>
  <description>For variables that are usually small, subgaussian concentration bounds can often be loose. We can get better Bernstein bounds using the &#39;sub-gamma&#39; approach.</description>
  <pubDate>Sun, 03 Feb 2019 00:00:00 GMT</pubDate>
</item>
<item>
  <title>Measure Concentration II</title>
  <link>https://www.bowaggoner.com/blog/2018/09-29-measure-concentration-ii/index.html</link>
  <guid>https://www.bowaggoner.com/blog/2018/09-29-measure-concentration-ii/index.html</guid>
  <description>This post will discuss how to get some key tail bounds from subgaussian assumptions, then introduce the wonderful world of martingale inequalities. This is a follow-up to posts on &lt;a href=&#39;https://www.bowaggoner.com/blog/2017/10-07-measure-concentration/&#39;&gt;measure concentration&lt;/a&gt; and &lt;a href=&#39;https://bowaggoner.com/blog/2017/12-18-subgaussianity/&#39;&gt;subgaussian variables&lt;/a&gt;.</description>
  <pubDate>Sat, 29 Sep 2018 00:00:00 GMT</pubDate>
</item>
<item>
  <title>Prophet Inequalities</title>
  <link>https://www.bowaggoner.com/blog/2018/08-25-prophet-inequalities/index.html</link>
  <guid>https://www.bowaggoner.com/blog/2018/08-25-prophet-inequalities/index.html</guid>
  <description>So-called &#39;Prophet Inequalities&#39; are cute mathematical stopping-time theorems with interesting applications in mechanism design.
The problem is to decide when to stop and claim one of a sequence of arriving random variables, performing almost as well as if you had prophetic foresight.</description>
  <pubDate>Sat, 25 Aug 2018 00:00:00 GMT</pubDate>
</item>
<item>
  <title>Prediction Markets II</title>
  <link>https://www.bowaggoner.com/blog/2018/08-08-prediction-markets-ii/index.html</link>
  <guid>https://www.bowaggoner.com/blog/2018/08-08-prediction-markets-ii/index.html</guid>
  <description>In this follow-up to &lt;a href=&#39;https://www.bowaggoner.com/blog/2017/10-03-prediction-markets/&#39;&gt;an introduction to prediction markets&lt;/a&gt;, prediction markets are back and badder than ever with a more formal and general mathematical approach.</description>
  <pubDate>Wed, 08 Aug 2018 00:00:00 GMT</pubDate>
</item>
<item>
  <title>Eliciting Means</title>
  <link>https://www.bowaggoner.com/blog/2018/08-02-eliciting-means/index.html</link>
  <guid>https://www.bowaggoner.com/blog/2018/08-02-eliciting-means/index.html</guid>
  <description>It turns out that &lt;em&gt;Bregman divergences&lt;/em&gt; generally characterize scoring rules to elicit an agent's expectation of a random variable.
This fact is important in machine learning and also has nice connections to the proper scoring rule characterization.
This is a continuation of the &lt;a href=&#39;http://www.bowaggoner.com/blog/series.html#convexity-elicitation&#39;&gt;series on elicitation&lt;/a&gt;.</description>
  <pubDate>Thu, 02 Aug 2018 00:00:00 GMT</pubDate>
</item>
<item>
  <title>Weitzman's Pandora's Box Problem</title>
  <link>https://www.bowaggoner.com/blog/2018/07-20-pandoras-box/index.html</link>
  <guid>https://www.bowaggoner.com/blog/2018/07-20-pandoras-box/index.html</guid>
  <description>The &#39;Pandora's Box&#39; problem is a cool model of search for the best alternative under uncertainty with a neat and intuitive solution.
This post describes a somewhat different proof and intuition with a nice extension to matching.</description>
  <pubDate>Fri, 20 Jul 2018 00:00:00 GMT</pubDate>
</item>
<item>
  <title>Hybrid Auction-Prediction Mechanisms</title>
  <link>https://www.bowaggoner.com/blog/2018/06-23-hybrid-auction-prediction-mechanisms/index.html</link>
  <guid>https://www.bowaggoner.com/blog/2018/06-23-hybrid-auction-prediction-mechanisms/index.html</guid>
  <description>This post describes a mechanism for making a group decision (along with monetary transfers) based both on people's preferences and their predictions.</description>
  <pubDate>Sat, 23 Jun 2018 00:00:00 GMT</pubDate>
</item>
<item>
  <title>The VCG Mechanism</title>
  <link>https://www.bowaggoner.com/blog/2018/06-22-vcg/index.html</link>
  <guid>https://www.bowaggoner.com/blog/2018/06-22-vcg/index.html</guid>
  <description>This post reviews the famous Vickrey-Clarke-Groves mechanism: the canonical truthfulness-inducing auction (procedure involving monetary transfers) for making a group decision.</description>
  <pubDate>Fri, 22 Jun 2018 00:00:00 GMT</pubDate>
</item>
<item>
  <title>Tight Bounds for Gaussian Tails and Hazard Rates</title>
  <link>https://www.bowaggoner.com/blog/2018/03-17-gaussian-tails/index.html</link>
  <guid>https://www.bowaggoner.com/blog/2018/03-17-gaussian-tails/index.html</guid>
  <description>This post will show how to tightly bound the tail probabilities of the Gaussian distribution from both sides with a closed form.</description>
  <pubDate>Sat, 17 Mar 2018 00:00:00 GMT</pubDate>
</item>
<item>
  <title>Eliciting Continuous Scalars</title>
  <link>https://www.bowaggoner.com/blog/2018/03-16-eliciting-continuous-scalars/index.html</link>
  <guid>https://www.bowaggoner.com/blog/2018/03-16-eliciting-continuous-scalars/index.html</guid>
  <description>Continuing the &lt;a href=&#39;https://www.bowaggoner.com/blog/series.html#convexity-elicitation&#39;&gt;series on elicitation&lt;/a&gt;, we'll take a look at a special class of properties or statistics of distributions: those that are scalar real numbers (as opposed to vectors) and whose value is a continuous function of the distribution.</description>
  <pubDate>Fri, 16 Mar 2018 00:00:00 GMT</pubDate>
</item>
<item>
  <title>Subgaussian Variables and Concentration</title>
  <link>https://www.bowaggoner.com/blog/2017/12-18-subgaussianity/index.html</link>
  <guid>https://www.bowaggoner.com/blog/2017/12-18-subgaussianity/index.html</guid>
  <description>In this post we'll take a look at the definition of subgaussian random variables and see how those are used in measure concentration.</description>
  <pubDate>Mon, 18 Dec 2017 00:00:00 GMT</pubDate>
</item>
<item>
  <title>Intro to Measure Concentration</title>
  <link>https://www.bowaggoner.com/blog/2017/10-07-measure-concentration/index.html</link>
  <guid>https://www.bowaggoner.com/blog/2017/10-07-measure-concentration/index.html</guid>
  <description>This post will introduce the concept of &#39;tail bounds&#39; or &#39;measure concentration&#39; and cover the basics of Markov's, Chebyshev's, and Chernoff-type bounds and how they are proven.</description>
  <pubDate>Sat, 07 Oct 2017 00:00:00 GMT</pubDate>
</item>
<item>
  <title>Useful Bounds via Taylor's Theorem</title>
  <link>https://www.bowaggoner.com/blog/2017/10-06-useful-bounds-taylors/index.html</link>
  <guid>https://www.bowaggoner.com/blog/2017/10-06-useful-bounds-taylors/index.html</guid>
  <description>In this post, we'll take a look at some common and useful bounds on the exponential and logarithm functions and see how they're derived using Taylor's Theorem.</description>
  <pubDate>Fri, 06 Oct 2017 00:00:00 GMT</pubDate>
</item>
<item>
  <title>Prediction Markets</title>
  <link>https://www.bowaggoner.com/blog/2017/10-03-prediction-markets/index.html</link>
  <guid>https://www.bowaggoner.com/blog/2017/10-03-prediction-markets/index.html</guid>
  <description>This post will introduce prediction markets based on cost functions and/or proper scoring rules. It will focus on intuition and background rather than technical details.</description>
  <pubDate>Tue, 03 Oct 2017 00:00:00 GMT</pubDate>
</item>
<item>
  <title>Risk Aversion and Decisionmaking</title>
  <link>https://www.bowaggoner.com/blog/2017/09-28-risk-aversion-and-decisionmaking/index.html</link>
  <guid>https://www.bowaggoner.com/blog/2017/09-28-risk-aversion-and-decisionmaking/index.html</guid>
  <description>In this post, I want to review the basics of risk aversion, then argue that risk aversion naturally arises in any situation where agents face a decision under uncertainty.</description>
  <pubDate>Thu, 28 Sep 2017 00:00:00 GMT</pubDate>
</item>
<item>
  <title>Eliciting Finite Properties</title>
  <link>https://www.bowaggoner.com/blog/2017/04-22-finite-properties/index.html</link>
  <guid>https://www.bowaggoner.com/blog/2017/04-22-finite-properties/index.html</guid>
  <description>In this post we'll look at eliciting &#39;finite properties&#39; of distributions: in other words, multiple-choice questions about some unknown or future event.
This is a continuation of the &lt;a href=&#39;http://www.bowaggoner.com/blog/series.html#convexity-elicitation&#39;&gt;series on elicitation&lt;/a&gt;.</description>
  <pubDate>Sat, 22 Apr 2017 00:00:00 GMT</pubDate>
</item>
<item>
  <title>Eliciting Properties of Distributions</title>
  <link>https://www.bowaggoner.com/blog/2017/04-12-eliciting-properties/index.html</link>
  <guid>https://www.bowaggoner.com/blog/2017/04-12-eliciting-properties/index.html</guid>
  <description>Continuing the &lt;a href=&#39;http://www.bowaggoner.com/blog/series.html#convexity-elicitation&#39;&gt;series on elicitation&lt;/a&gt;, we'll take a look at &lt;em&gt;properties&lt;/em&gt; or statistics of distributions: things like the mode, median, or variance.
This post will focus on defining elicitation of properties and showing how the convex characterization of proper scoring rules extends. We'll look at more concrete cases and implications in later posts.</description>
  <pubDate>Wed, 12 Apr 2017 00:00:00 GMT</pubDate>
</item>
<item>
  <title>k-Way Collisions of Balls in Bins</title>
  <link>https://www.bowaggoner.com/blog/2017/01-06-collisions-in-bins/index.html</link>
  <guid>https://www.bowaggoner.com/blog/2017/01-06-collisions-in-bins/index.html</guid>
  <description>A basic probability question is what happens when we draw i.i.d. samples from a distribution, often referred to as &#39;throwing balls into bins&#39;.
Here I just want to show how the number of &#39;k-way collisions&#39; can give a simple yet useful analysis, especially of the size of the max-loaded bin.</description>
  <pubDate>Fri, 06 Jan 2017 00:00:00 GMT</pubDate>
</item>
<item>
  <title>Convex Duality</title>
  <link>https://www.bowaggoner.com/blog/2016/10-20-convex-duality/index.html</link>
  <guid>https://www.bowaggoner.com/blog/2016/10-20-convex-duality/index.html</guid>
  <description>Every convex function has a special relative (or perhaps &#39;evil twin&#39;) called its &lt;em&gt;conjugate&lt;/em&gt; or &lt;em&gt;dual&lt;/em&gt;.
In this post, we'll walk through the definition both formally and visually.</description>
  <pubDate>Thu, 20 Oct 2016 00:00:00 GMT</pubDate>
</item>
<item>
  <title>Divergences and Value of Information</title>
  <link>https://www.bowaggoner.com/blog/2016/10-07-value-divergences/index.html</link>
  <guid>https://www.bowaggoner.com/blog/2016/10-07-value-divergences/index.html</guid>
  <description>This is a follow-up to &lt;a href=&#39;http://www.bowaggoner.com/blog/2016/09-24-generalized-entropies&#39;&gt;generalized entropies and the value of information&lt;/a&gt;, where we discussed how value of information connects to generalized entropies. Here, we'll connect both to Bregman divergences, filling in a third side of the triangle.</description>
  <pubDate>Fri, 07 Oct 2016 00:00:00 GMT</pubDate>
</item>
<item>
  <title>Risk Aversion and Max-Entropy</title>
  <link>https://www.bowaggoner.com/blog/2016/10-02-risk-aversion-entropy/index.html</link>
  <guid>https://www.bowaggoner.com/blog/2016/10-02-risk-aversion-entropy/index.html</guid>
  <description>I want to share a nice little problem that came up in discussion between &lt;a href=&#39;http://yiling.seas.harvard.edu/&#39;&gt;Yiling Chen&lt;/a&gt;, &lt;a href=&#39;http://madhu.seas.harvard.edu/&#39;&gt;Madhu Sudan&lt;/a&gt;, and myself.
It turns out to have a cute solution that connects the geometry of proper scoring rules with a &#39;max-entropy&#39; rule.</description>
  <pubDate>Sun, 02 Oct 2016 00:00:00 GMT</pubDate>
</item>
<item>
  <title>Generalized Entropies and the Value of Information</title>
  <link>https://www.bowaggoner.com/blog/2016/09-24-generalized-entropies/index.html</link>
  <guid>https://www.bowaggoner.com/blog/2016/09-24-generalized-entropies/index.html</guid>
  <description>In this post, I'll discuss axioms for entropy functions that generalize Shannon entropy and connect them to the value of information for a rational decision maker.
We'll see that &lt;a href=&#39;http://www.bowaggoner.com/blog/2016/09-20-convexity/&#39;&gt;convexity&lt;/a&gt; turns out to be a natural and simple axiom that nicely links the two settings.</description>
  <pubDate>Sat, 24 Sep 2016 00:00:00 GMT</pubDate>
</item>
<item>
  <title>Proper Scoring Rules</title>
  <link>https://www.bowaggoner.com/blog/2016/09-22-proper-scoring-rules/index.html</link>
  <guid>https://www.bowaggoner.com/blog/2016/09-22-proper-scoring-rules/index.html</guid>
  <description>How can we elicit truthful predictions from strategic agents?
Proper scoring rules give a surprisingly complete and mathematically beautiful answer.
In this post, we'll work up to the classic characterization of all &#39;proper&#39; (truthful) scoring rules in terms of convex functions of probability distributions.</description>
  <pubDate>Thu, 22 Sep 2016 00:00:00 GMT</pubDate>
</item>
<item>
  <title>Convexity</title>
  <link>https://www.bowaggoner.com/blog/2016/09-20-convexity/index.html</link>
  <guid>https://www.bowaggoner.com/blog/2016/09-20-convexity/index.html</guid>
  <description>Convexity is a simple, intuitive, yet surprisingly powerful mathematical concept.
It shows up repeatedly in the study of efficient algorithms and in game theory.
The goal of this article is to review the basics so we can put convexity to use later.</description>
  <pubDate>Tue, 20 Sep 2016 00:00:00 GMT</pubDate>
</item>
</channel>
</rss>