My Notes on Diffusion Models

Diffusion Models

Link: https://theaisummer.com/diffusion-models/

Markovian Hierarchical VAE

Random variables:

  • data: $x_0$
  • representation: $x_T$

Consider the pair $(p(x_0,x_1,\cdots,x_T),\ q(x_1,\cdots,x_T|x_0))$,
where $x_1,\cdots,x_T$ are unobservable (latent), and

  • generative model / backward trajectory: $p(x_0,x_1,\cdots,x_T)=p(x_T)\prod_t p(x_{t-1}|x_t)$
  • forward trajectory (Markov process): $q(x_1,\cdots,x_T|x_0)=\prod_t q(x_t|x_{t-1})$
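The ELBO below comes from Jensen's inequality applied to the marginal likelihood:
$$\log p(x_0)=\log\int q(x_{1:T}|x_0)\,\frac{p(x_{0:T})}{q(x_{1:T}|x_0)}\,\mathrm{d}x_{1:T}\ \ge\ \int q(x_{1:T}|x_0)\log\frac{p(x_{0:T})}{q(x_{1:T}|x_0)}\,\mathrm{d}x_{1:T};$$
substituting the two factorizations above and regrouping terms by $t$ yields the decomposition: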

$$\mathrm{ELBO}:=\int q(x_T|x_0)\log\frac{p(x_T)}{q(x_T|x_0)}\,\mathrm{d}x_T+\sum_{t=2}^T\int q(x_{t-1},x_t|x_0)\log\frac{p(x_{t-1}|x_t)}{q(x_{t-1}|x_t,x_0)}\,\mathrm{d}x_{t-1}\,\mathrm{d}x_t+\int q(x_1|x_0)\log p(x_1|x_0)\,\mathrm{d}x_1$$

Loss

$$\mathrm{Loss}:=-\mathrm{ELBO}=D_{KL}(q(x_T|x_0)\,\|\,p(x_T))+\sum_{t=2}^T\int q(x_t|x_0)\,D_{KL}(q(x_{t-1}|x_t,x_0)\,\|\,p(x_{t-1}|x_t))\,\mathrm{d}x_t-\int q(x_1|x_0)\log p(x_1|x_0)\,\mathrm{d}x_1$$
The three terms are, in order, the:

  • prior matching term
  • denoising matching term
  • reconstruction term

Diffusion Models

Basic assumptions

  • tractable distribution: $p(x_T)$
  • forward trajectory (Markov process): $q(x_t|x_{t-1})$ is fixed (it has no learnable parameters)

Definition (Diffusion Model)

  • tractable distribution: $p(x_T)\sim N(0,1)$
  • generative model / backward trajectory: $p(x_{t-1}|x_t)\sim N(\mu(t),\Sigma(t))$
  • forward trajectory (Gaussian diffusion): $q(x_t|x_{t-1})\sim N(\sqrt{1-\beta_t}\,x_{t-1},\ \beta_t)$
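In reparameterized form, one forward step reads $x_t=\sqrt{1-\beta_t}\,x_{t-1}+\sqrt{\beta_t}\,\epsilon_t$ with $\epsilon_t\sim N(0,1)$; composing such Gaussian steps is what yields the closed form for $q(x_t|x_0)$ in the Fact below.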

Parameters:

  • noise schedule: $\beta_t=1-\alpha_t$, or equivalently $\bar\alpha_t:=\prod_{s\le t}\alpha_s$, where each $\beta_t$ is small (so $\alpha_t$ is close to $1$)
  • signal rate: $\sqrt{\bar\alpha_t}$
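To make these quantities concrete, here is a minimal NumPy sketch; the linear $\beta$ schedule and its endpoints follow Ho et al. (2020), while `T = 1000` is an illustrative choice, not fixed by this note:

```python
import numpy as np

# Minimal sketch of a noise schedule (linear betas, as in Ho et al., 2020).
T = 1000                                # number of diffusion steps (illustrative)
betas = np.linspace(1e-4, 0.02, T)      # beta_t: small for every t
alphas = 1.0 - betas                    # alpha_t = 1 - beta_t, close to 1
alpha_bars = np.cumprod(alphas)         # \bar{alpha}_t = prod_{s<=t} alpha_s
signal_rate = np.sqrt(alpha_bars)       # sqrt(\bar{alpha}_t): signal rate
```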


Fact.

  • $q(x_t|x_0)\sim N(\sqrt{\bar\alpha_t}\,x_0,\ 1-\bar\alpha_t)$
  • $q(x_{t-1}|x_t,x_0)\sim N(\mu_q(x_t,x_0),\ \sigma^2(t))$, where
    $$\mu_q(x_t,x_0):=\frac{\sqrt{\alpha_t}(1-\bar\alpha_{t-1})\,x_t+\sqrt{\bar\alpha_{t-1}}(1-\alpha_t)\,x_0}{1-\bar\alpha_t}=\frac{1}{\sqrt{\alpha_t}}x_t-\frac{\beta_t}{\sqrt{1-\bar\alpha_t}\sqrt{\alpha_t}}\epsilon_0,$$
    $\sigma^2(t):=\frac{1-\bar\alpha_{t-1}}{1-\bar\alpha_t}\beta_t$, and $\epsilon_0$ is the noise realizing $x_t=\sqrt{\bar\alpha_t}\,x_0+\sqrt{1-\bar\alpha_t}\,\epsilon_0$.
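The first fact lets one sample $x_t$ from $x_0$ in a single shot rather than iterating the chain $t$ times. A minimal sketch, reusing `alpha_bars` from the schedule code above (with `t` a 0-based array index):

```python
import numpy as np

def q_sample(x0, t, alpha_bars, rng):
    """Draw x_t ~ q(x_t | x_0) = N(sqrt(abar_t) x0, (1 - abar_t) I) in closed form."""
    eps = rng.standard_normal(x0.shape)   # eps ~ N(0, I)
    xt = np.sqrt(alpha_bars[t]) * x0 + np.sqrt(1.0 - alpha_bars[t]) * eps
    return xt, eps

rng = np.random.default_rng(0)
x0 = rng.standard_normal(3)               # a toy data point
xt, eps0 = q_sample(x0, t=500, alpha_bars=alpha_bars, rng=rng)
```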

Design I: $p(x_{t-1}|x_t)\sim N(\mu(t),\Sigma(t))$, plugging a network estimate $\hat{x}(x_t,t)$ of $x_0$ into $\mu_q$:
$$\mu(t)=\frac{\sqrt{\alpha_t}(1-\bar\alpha_{t-1})\,x_t+\sqrt{\bar\alpha_{t-1}}\,\beta_t\,\hat{x}(x_t,t)}{1-\bar\alpha_t},\qquad\Sigma(t)=\sigma^2(t)$$

Design II: $p(x_{t-1}|x_t)\sim N(\mu(t),\Sigma(t))$, plugging a network estimate $\hat{\epsilon}(x_t,t)$ of $\epsilon_0$ into $\mu_q$:
$$\mu(t)=\frac{1}{\sqrt{\alpha_t}}x_t-\frac{\beta_t}{\sqrt{1-\bar\alpha_t}\sqrt{\alpha_t}}\hat{\epsilon}(x_t,t),\qquad\Sigma(t)=\sigma^2(t)$$
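Under Design II, one reverse step $x_t\to x_{t-1}$ is a Gaussian move with the $\mu(t)$ and $\sigma^2(t)$ above. A hedged sketch, where `eps_hat` stands in for the output $\hat\epsilon(x_t,t)$ of a trained network (hypothetical here), and `betas`, `alphas`, `alpha_bars` come from the schedule sketch:

```python
import numpy as np

def p_sample_step(xt, t, eps_hat, betas, alphas, alpha_bars, rng):
    """One reverse step under Design II (t is a 0-based index, t >= 1)."""
    # mu(t) = x_t / sqrt(alpha_t) - beta_t / (sqrt(1 - abar_t) * sqrt(alpha_t)) * eps_hat
    mu = (xt - betas[t] / np.sqrt(1.0 - alpha_bars[t]) * eps_hat) / np.sqrt(alphas[t])
    # sigma^2(t) = (1 - abar_{t-1}) / (1 - abar_t) * beta_t
    sigma2 = (1.0 - alpha_bars[t - 1]) / (1.0 - alpha_bars[t]) * betas[t]
    return mu + np.sqrt(sigma2) * rng.standard_normal(xt.shape)
```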

Fact.
Under Design I:
$$D_{KL}(q(x_{t-1}|x_t,x_0)\,\|\,p_\theta(x_{t-1}|x_t))=\frac{1}{2\sigma^2(t)}\frac{\bar\alpha_{t-1}\beta_t^2}{(1-\bar\alpha_t)^2}\,\|\hat{x}(x_t,t)-x_0\|^2=\frac{1}{2}\left(\frac{1}{1-\bar\alpha_{t-1}}-\frac{1}{1-\bar\alpha_t}\right)\|\hat{x}(x_t,t)-x_0\|^2,$$
where the second equality uses $\sigma^2(t)=\frac{(1-\bar\alpha_{t-1})\beta_t}{1-\bar\alpha_t}$.

Under Design II:
$$D_{KL}(q(x_{t-1}|x_t,x_0)\,\|\,p_\theta(x_{t-1}|x_t))=\frac{1}{2\sigma^2(t)}\frac{\beta_t^2}{(1-\bar\alpha_t)\,\alpha_t}\,\|\hat{\epsilon}(x_t,t)-\epsilon_0\|^2$$

Algorithm

Loss:
$$L=\sum_t L_t,\qquad L_t\approx\sum_{\epsilon\sim N(0,1)}\|\epsilon-\hat{\epsilon}(x_t,t)\|^2\quad(0\le t<T)$$
where $x_t:=\sqrt{\bar\alpha_t}\,x_0+\sqrt{1-\bar\alpha_t}\,\epsilon$. Following Ho et al. (2020), the time-dependent weight $\frac{\beta_t^2}{2\sigma^2(t)(1-\bar\alpha_t)\alpha_t}$ from the KL term above is dropped.

Train the NN $\hat\epsilon$ by matching predictions $\hat{\epsilon}(x_t(x_{0,i},\epsilon_{il}),t)$ to targets $\epsilon_{il}$, where $\epsilon_{il}\sim N(0,1)$, $l=1,\cdots,L$, $i=1,\cdots,N$, giving a dataset of size $NL$ for each $t$; a sketch of the resulting objective follows.
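A minimal sketch of the training objective for one $t$; `eps_model` is a hypothetical stand-in for the network $\hat\epsilon$ (any callable mapping `(xt, t)` to a noise prediction of the same shape), and `alpha_bars` comes from the schedule sketch above:

```python
import numpy as np

def loss_at_t(eps_model, x0_batch, t, alpha_bars, rng):
    """Monte Carlo estimate of L_t = E_{eps ~ N(0,I)} ||eps - eps_hat(x_t, t)||^2."""
    eps = rng.standard_normal(x0_batch.shape)   # targets epsilon_{il}
    xt = np.sqrt(alpha_bars[t]) * x0_batch + np.sqrt(1.0 - alpha_bars[t]) * eps
    return np.mean(np.sum((eps - eps_model(xt, t)) ** 2, axis=-1))
```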


Exercise

  1. Given a latent variable model $p(x,z)$ with variational distribution $q(z|x)$, let $q(x)$ denote the data distribution and set $q(x,z)=q(z|x)q(x)$. Show that
     $$\int q(x)\,L_x\,\mathrm{d}x=\int q(x,z)\log\frac{p(x,z)}{q(z|x)}\,\mathrm{d}x\,\mathrm{d}z\sim -D_{KL}(q(x,z)\,\|\,p(x,z)),$$
     where $L_x$ is the ELBO and $\sim$ denotes equality up to an additive constant.
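One way to see this (a sketch): expanding $q(x,z)=q(z|x)q(x)$ inside the logarithm,
$$\int q(x)\,L_x\,\mathrm{d}x=\int q(x,z)\log\frac{p(x,z)}{q(x,z)}\,\mathrm{d}x\,\mathrm{d}z+\int q(x)\log q(x)\,\mathrm{d}x=-D_{KL}(q(x,z)\,\|\,p(x,z))-H(q(x)),$$
so maximizing the expected ELBO is equivalent to minimizing $D_{KL}(q(x,z)\,\|\,p(x,z))$, the entropy $H(q(x))$ of the data distribution being a constant.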

References

  1. Jonathan Ho, Ajay Jain, Pieter Abbeel. Denoising Diffusion Probabilistic Models, 2020.
  2. Calvin Luo. Understanding Diffusion Models: A Unified Perspective, 2022.
