I recently worked through the derivation of the Particle Filter again
and, with a more detailed understanding, wrote a small summary. It is mainly about my own understanding.
Since it is written in LaTeX, I cannot paste the rendered result here,
so I have pasted the LaTeX source below. I hope it helps others who are studying the same topic.
\documentclass[paper=a4, fontsize=11pt]{scrartcl} % A4 paper and 11pt font size

\usepackage[T1]{fontenc} % Use 8-bit encoding that has 256 glyphs
\usepackage{fourier} % Use the Adobe Utopia font - comment this line to return to the LaTeX default
\usepackage[english]{babel} % English language/hyphenation
\usepackage{amsmath,amsfonts,amsthm} % Math packages
\usepackage{lipsum} % Used for inserting dummy 'Lorem ipsum' text into the template

\usepackage{sectsty} % Allows customizing section commands
\allsectionsfont{\centering \normalfont\scshape} % Make all sections centered, default font, small caps

\usepackage{fancyhdr} % Custom headers and footers
\pagestyle{fancyplain} % Makes all pages conform to the custom headers and footers
\fancyhead{} % No page header
\fancyfoot[L]{} % Empty left footer
\fancyfoot[C]{} % Empty center footer
\fancyfoot[R]{\thepage} % Page numbering for right footer
\renewcommand{\headrulewidth}{0pt} % Remove header underlines
\renewcommand{\footrulewidth}{0pt} % Remove footer underlines
\setlength{\headheight}{13.6pt} % Customize the height of the header

\numberwithin{equation}{section} % Number equations within sections (i.e. 1.1, 1.2, 2.1, ...)
\numberwithin{figure}{section} % Number figures within sections
\numberwithin{table}{section} % Number tables within sections

\setlength\parindent{0pt} % Removes all indentation from paragraphs

%----------------------------------------------------------------------------------------
%	TITLE SECTION
%----------------------------------------------------------------------------------------

\newcommand{\horrule}[1]{\rule{\linewidth}{#1}} % Horizontal rule command with one height argument

\title{\normalfont\normalsize\huge Particle Filter Study} % The title
\author{Truman Nie} % Your name
\date{\normalsize\today} % Today's date or a custom date

\begin{document}

\maketitle % Print the title

%----------------------------------------------------------------------------------------
%	PROBLEM 1
%----------------------------------------------------------------------------------------

\section{What is a Particle Filter?}

The particle filter is a widely used estimation method with many applications, such as tracking and location recommendation. I pay attention to it because my research is on tracking people and other targets, and I often use the particle filter when building projects. However, I found that I did not understand it precisely, so I decided to spend some time studying this model.\\\\
In the next sections, I will explain it from my own point of view. The goal of the particle filter is to estimate the sequence of hidden states $x_k$ for $k=0,1,2,3,\dots$, based only on the observed data $y_k$ for $k=0,1,2,3,\dots$. All Bayesian estimates of $x_k$ follow from the posterior distribution $p(x_k|y_0,y_1,\dots,y_k)$.
In contrast, an MCMC or importance sampling approach would model the full posterior $p(x_0,x_1,\dots,x_k|y_0,y_1,\dots,y_k)$.\\\\
I think every particle filter problem should first be converted to the following form:
\begin{align}
\begin{split}
x_k &= g(x_{k-1})+w_k\\
y_k &= h(x_k)+v_k
\end{split}
\end{align}
where $w_k$ and $v_k$ are mutually independent, identically distributed noise sequences with known probability density functions, and $g(\cdot)$ and $h(\cdot)$ are known functions. These two equations can be viewed as state space equations, and they look similar to the state space equations of the Kalman filter. If the functions $g(\cdot)$ and $h(\cdot)$ are linear, and if both $w_k$ and $v_k$ are Gaussian, the Kalman filter finds the exact Bayesian filtering distribution. If not, Kalman filter based methods are a first-order approximation (EKF) or a second-order approximation (UKF in general, but if the probability distribution is Gaussian a third-order approximation is possible). Particle filters are also an approximation, but with enough particles they can be much more accurate.\\\\
In this setting, the particle filter can get better results than the Kalman filter. However, it also needs enough particles, which directly increases the computational load of the system.

%------------------------------------------------

\section{Monte Carlo Approximation}

Particle methods, like all sampling-based approaches (e.g., MCMC), generate a set of samples that approximate the filtering distribution $p(x_k|y_0,\dots,y_k)$. So, with $P$ samples, expectations with respect to the filtering distribution are approximated by
\begin{align}
\int f(x_k)p(x_k|y_0,\dots,y_k)\,dx_k \approx \frac{1}{P}\sum_{L=1}^{P}f(x_{k}^{L})
\end{align}
and $f(\cdot)$, in the usual Monte Carlo way, can give all the moments etc.\ of the distribution up to some degree of approximation.\\\\
Monte Carlo by itself is only an approximation technique: it allows us to use a finite set of samples to estimate properties of the whole distribution.
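For instance (a special case I add here only for concreteness), taking $f(x_k)=x_k$ in the approximation above yields the Monte Carlo estimate of the filtered mean, which is simply the sample average of the particles:
\begin{align}
\hat{x}_k = \int x_k\, p(x_k|y_0,\dots,y_k)\,dx_k \approx \frac{1}{P}\sum_{L=1}^{P} x_k^{L}
\end{align}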
We just need to remember this formula.\\

%------------------------------------------------

\section{Sequential Importance Resampling (SIR)}

Sequential importance resampling (SIR), the original particle filtering algorithm, is a very commonly used method. It approximates the filtering distribution $p(x_k|y_0,\dots,y_k)$ by a weighted set of $P$ particles:
\begin{align*}
\left\{ (w_k^L, x_k^L) : L\in\{1,\dots,P\} \right\}
\end{align*}
The importance weights $w_k^L$ must satisfy
\begin{align*}
\sum_{L=1}^{P}w_k^L=1
\end{align*}
Resampling is used to avoid the degeneracy problem of the algorithm, that is, to avoid the situation where all but one of the importance weights are close to zero. The performance of the algorithm is also affected by a proper choice of resampling method. In the next section, I explain the particle filter steps.

%----------------------------------------------------------------------------------------
%	PROBLEM 2
%----------------------------------------------------------------------------------------

\section{Particle Filter Step}

In this section I give a single step of the sequential particle filter.\\
1) We use samples to estimate the PDF, so we need to initialize the first particle set, $G_0=\{(x_0^L, w_0^L) : L=1,\dots,P\}$, typically with $w_0^L = 1/P$.\\
2) For $L=1,\dots,P$, we use the state equation 1.1 to propagate each particle to the next state:
\begin{align}
x_k^L \sim f(x_k|x_{0:k-1}^L, y_{0:k})
\end{align}
3) For $L=1,\dots,P$, update the importance weights up to a normalizing constant:
\begin{align}
\hat{w}_k^L = w_{k-1}^L\, p(y_k|x_k^L)
\end{align}
4) Then we compute the normalized importance weights:
\begin{align}
w_k^L = \frac{\hat{w}_k^L}{\sum_{J=1}^{P}\hat{w}_k^J}
\end{align}
5) Compute an estimate of the effective number of particles:
\begin{align}
\hat{N}_{\mathrm{eff}} = \frac{1}{\sum_{L=1}^{P}(w_k^L)^2}
\end{align}
6) If the effective number of particles is less than a given threshold, $\hat{N}_{\mathrm{eff}} < N_{\mathrm{thr}}$, then perform resampling:\\\\
a) Draw $P$ particles from the current particle set, with probabilities proportional to their weights, and replace the current particle set with this new one.\\
b) For $L=1,\dots,P$ set $w_k^L=1/P$.\\\\
This is the original particle filter. The term sampling importance resampling is also sometimes used when referring to SIR filters.

\section{Some Questions}

Some readers may want to ask: what is Monte Carlo actually used for here? This was my own question when I first saw the particle filter.\\\\
In the section above we found the weight-update function. Where does this update rule come from? We need Monte Carlo to derive it, and in the following I explain it carefully.\\\\
Why do we need weights? Because we need to know which particles are important and which can be discarded; in other words, we need to know the probability of each particle. Now, assume that we have the prior distribution $p(U)$, the samples $\{u_i\}$, and the weights $w_i$. We want to obtain the posterior distribution $p(U|V=v_0)$ and the updated weights $w'_i$.
\begin{align}
\begin{split}
p(U|V=v_0) &= \frac{p(V=v_0|U)p(U)}{\int p(V=v_0|U)p(U)\,dU}\\
&= \frac{p(V=v_0|U)p(U)}{K}
\end{split}
\end{align}
The normalizing constant $K$ can be approximated using the weighted samples:
\begin{align}
K = \int p(V=v_0|U)p(U)\,dU \simeq \frac{1}{N}\sum_{i=1}^{N}p(V=v_0|u_i)w_i
\end{align}
So we can approximate the expectation of any function $g$ under the posterior distribution:
\begin{align}
\begin{split}
\int p(U|V=v_0)g(U)\,dU &= \frac{1}{K}\int p(V=v_0|U)p(U)g(U)\,dU\\
&\simeq \frac{1}{K}\,\frac{1}{N}\sum_{i=1}^{N} p(V=v_0|u_i)g(u_i)w_i\\
&\simeq \frac{\sum_{i=1}^{N}p(V=v_0|u_i)g(u_i)w_i}{\sum_{i=1}^{N} p(V=v_0|u_i)w_i}
\end{split}
\end{align}
Now we can see that $p(V=v_0|u_i)w_i$ plays the role of the (unnormalized) weight under the posterior distribution. In other words:
\begin{align}
w'_i = p(V=v_0|u_i)w_i
\end{align}
This is the weight-update function used in the particle filter.\\\\
In my opinion, the particle filter is not only a filter but also a framework. If we can build a matching model of the form 1.1, we can use the particle filter to obtain a good approximate solution in many different projects.

\end{document}
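As a companion to the LaTeX above, here is a minimal Python sketch of the SIR loop from the "Particle Filter Step" section. The model (a 1-D Gaussian random walk observed with Gaussian noise) and all parameter values are my own assumptions chosen for the demo, not part of the write-up itself:

```python
import numpy as np

def sir_step(particles, weights, y, rng, q=1.0, r=0.5):
    """One SIR update for the toy model x_k = x_{k-1} + w_k, y_k = x_k + v_k.
    q and r are the (assumed) std devs of the process and observation noise."""
    P = len(particles)
    # step 2: propagate each particle through the state equation
    particles = particles + rng.normal(0.0, q, size=P)
    # step 3: reweight by the observation likelihood p(y_k | x_k^L)
    weights = weights * np.exp(-0.5 * ((y - particles) / r) ** 2)
    # step 4: normalize the importance weights
    weights = weights / weights.sum()
    # steps 5-6: effective sample size; resample if it drops below P/2
    n_eff = 1.0 / np.sum(weights ** 2)
    if n_eff < P / 2:
        idx = rng.choice(P, size=P, p=weights)
        particles, weights = particles[idx], np.full(P, 1.0 / P)
    return particles, weights

rng = np.random.default_rng(0)
particles = rng.normal(0.0, 1.0, size=500)   # step 1: initial particle set
weights = np.full(500, 1.0 / 500)
true_x = 0.0
for k in range(30):
    true_x += rng.normal(0.0, 1.0)           # simulate the hidden state
    y = true_x + rng.normal(0.0, 0.5)        # noisy observation
    particles, weights = sir_step(particles, weights, y, rng)

estimate = np.sum(weights * particles)       # weighted posterior-mean estimate
```

With 500 particles and frequent resampling, the weighted mean of the particles tracks the hidden random walk closely; raising the resampling threshold trades accuracy for more frequent resampling work.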