
Rylan Schaeffer


Dependent Dirichlet Process

The Dependent Dirichlet Process (DDP) is a modified version of the Dirichlet Process that essentially defines a Markov chain of DPs. The idea is that the paired traits and probabilities of the DP can change over time: they can be born, move, or die.

Definition

Let $D \sim \text{DP}(\mu)$, where $\mu: \Omega \to \mathbb{R}_+$ is the base measure and $\alpha_\mu = \int_\Omega d\mu$ is the concentration parameter. We can think of $D$ as an infinite sum of paired traits and probabilities:

$$D = \sum_{k=1}^{\infty} \pi_k \delta_{\theta_k}, \qquad (\theta_k, \pi_k) \in \Omega \times \mathbb{R}_+$$
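As a concrete reference point, a draw from a single DP can be approximated by the standard stick-breaking construction (this sketch, including the function name and the truncation to finitely many atoms, is an illustrative assumption, not part of the original derivation):

```python
import numpy as np

def sample_dp_atoms(alpha, base_sampler, num_atoms=2000, rng=None):
    """Truncated stick-breaking approximation of D ~ DP(mu).

    base_sampler(rng) draws one trait theta_k from the normalized base
    measure mu / alpha_mu; alpha is the concentration parameter.
    Hypothetical helper names, for illustration only.
    """
    rng = np.random.default_rng(rng)
    # Stick-breaking: beta_k ~ Beta(1, alpha), pi_k = beta_k * prod_{j<k}(1 - beta_j).
    betas = rng.beta(1.0, alpha, size=num_atoms)
    remaining = np.concatenate(([1.0], np.cumprod(1.0 - betas[:-1])))
    weights = betas * remaining   # the pi_k; their sum approaches 1 as num_atoms grows
    thetas = np.array([base_sampler(rng) for _ in range(num_atoms)])
    return thetas, weights
```

The returned `(thetas, weights)` pairs are exactly the $(\theta_k, \pi_k)$ atoms that the DDP's stochastic operations will act on.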

The DDP is a Markov chain of DPs $(D_1, D_2, \dots)$ where transitions are governed by 3 stochastic operations:

  1. Subsampling (death): Define $q: \Omega \to [0, 1]$. Then, for each $(\theta, \pi) \in D_{t-1}$, sample $b_\theta \sim \text{Bernoulli}(q(\theta))$; in English, for each trait, flip a coin with probability $q(\theta_k)$. Keep only the traits which come up 1 (heads) and renormalize the random probability measure:

$$D_t^{\text{subsample}} \sim \text{DP}(q\mu_{t-1})$$

where

$$(q\mu)(A) = \int_A q(\theta)\, \mu(d\theta)$$
  2. Transition (move): Define a distribution $T: \Omega \times \Omega \to \mathbb{R}_+$. For each $(\theta, \pi)$ in $D_t^{\text{subsample}}$, sample $\theta' \sim T(\theta' \mid \theta)$ and set

$$D_t^{\text{transition}} \sim \text{DP}(Tq\mu_{t-1})$$

where

$$(T\mu)(A) = \int_A \int_\Omega T(\theta' \mid \theta)\, \mu(d\theta)\, d\theta'$$

Intuitively, this just says that the trait locations (the atoms of the DP) move according to $T$.

  3. Superposition (birth): Sample $F \sim \text{DP}(\nu)$ and sample $(c_D, c_F) \sim \text{Dir}(Tq\mu_{t-1}(\Omega), \nu(\Omega))$. Then set $D_t$ as the union of $(\theta, c_D \pi)$ for all $(\theta, \pi) \in D_t^{\text{transition}}$ and $(\theta, c_F \pi)$ for all $(\theta, \pi) \in F$. More succinctly,

$$D_t \sim \text{DP}(Tq\mu_{t-1} + \nu)$$

After these three steps, we have the next DP in the Markov chain; this Markov chain of DPs is what defines the Dependent Dirichlet Process.
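The three operations can be sketched on a finite (truncated) set of weighted atoms. This is a minimal illustration, assuming a finite-atom approximation; all argument names (`q`, `transition`, `birth_sampler`, the mass parameters) are hypothetical and not from the original construction:

```python
import numpy as np

def ddp_step(thetas, weights, q, transition, birth_sampler, birth_mass,
             survive_mass, rng=None):
    """One DDP transition D_{t-1} -> D_t on a finite set of weighted atoms.

    q(theta):              survival probability for subsampling (death).
    transition(theta, r):  draws theta' ~ T(theta' | theta) (move).
    birth_sampler(r):      returns (atoms, weights) of F ~ DP(nu) (birth).
    birth_mass:            nu(Omega); survive_mass: (Tq mu_{t-1})(Omega).
    """
    rng = np.random.default_rng(rng)
    # 1. Subsampling (death): flip a coin per trait, keep heads, renormalize.
    keep = rng.random(len(thetas)) < np.array([q(th) for th in thetas])
    thetas, weights = thetas[keep], weights[keep]
    weights = weights / weights.sum()
    # 2. Transition (move): each surviving trait moves, theta' ~ T(. | theta).
    thetas = np.array([transition(th, rng) for th in thetas])
    # 3. Superposition (birth): merge with fresh atoms F ~ DP(nu), mixing the
    #    two parts with (c_D, c_F) ~ Dir(survive_mass, birth_mass).
    new_thetas, new_weights = birth_sampler(rng)
    c_d, c_f = rng.dirichlet([survive_mass, birth_mass])
    thetas = np.concatenate([thetas, new_thetas])
    weights = np.concatenate([c_d * weights, c_f * new_weights])
    return thetas, weights
```

Iterating `ddp_step` yields the finite-atom analogue of the Markov chain $(D_1, D_2, \dots)$; at every step the output weights again sum to one, matching the fact that each $D_t$ is itself a DP draw.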

Low Variance Asymptotics