Reference [1] and patent [2] describe an algorithm called ViBe, which will be detailed later.
Background subtraction approaches the problem as follows: the current image is compared with a known reference image that contains no objects of interest; this reference is the background model (or background image) [3]. The comparison process is called foreground detection. It partitions the observed image into two complementary sets of pixels that together cover the whole image: 1) the foreground, which contains the objects of interest, and 2) its complement, the background.
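As a concrete illustration of this comparison step, the sketch below thresholds the absolute difference between the current frame and a fixed background image; the function name and the `threshold` value are illustrative, not part of any cited method.

```python
import numpy as np

def subtract_background(frame, background, threshold=30):
    """Label pixels as foreground (True) when they differ from the
    background image by more than `threshold` (an illustrative value)."""
    diff = np.abs(frame.astype(np.int16) - background.astype(np.int16))
    if diff.ndim == 3:           # color input: take the largest channel difference
        diff = diff.max(axis=2)
    return diff > threshold      # boolean foreground mask
```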
As pointed out in [4], with the current state of the art there is no well-established rule in background subtraction for how foreground regions and objects should be detected and defined.
Many background subtraction algorithms have been proposed, along with many models and segmentation strategies [3]-[9]. Some algorithms target particular requirements. For example, the method in [7] must cope with gradual and sudden illumination changes (time of day, clouds, etc.), motion changes (camera jitter), high-frequency background objects (waving leaves and branches), and changes in the background geometry (e.g., parked cars). Some applications require the background subtraction to be embedded in the camera hardware, so computational load is the primary concern. For outdoor scene surveillance, robustness to noise and adaptability to illumination changes are essential.
Many methods in the literature take the mutual independence of pixels as a premise. These methods lower the difficulty of subsequent processing and then attempt to reintroduce some form of spatial consistency into the results. Because of perturbations at individual pixels, the regions around those pixels are often misclassified. Seki et al. [10] instead assume that the blocks surrounding a background pixel exhibit similar variations over time. This assumption holds most of the time, in particular when a pixel and its surrounding block belong to the same object, but it breaks down in backgrounds containing multiple objects, where the surrounding pixels belong to other objects. These problems notwithstanding, pixels can still be aggregated into N×N blocks and treated as vectors of N² elements. Some approaches apply PCA to each block gathered over a period of time. The parameters obtained by PCA reconstruction from a background block and its 8-neighborhood are compared with those of the corresponding block of a new video frame; if the parameters are close, the new block is classified as background. This approach is described in [11], but it lacks a mechanism for updating the block models over time. In [12], the authors study the reconstruction error of PCA. Although their PCA model does evolve over time, it covers only part of the whole image. Subtracting the current image from the image back-projected from the PCA coefficients readily yields a difference that can be thresholded to classify each pixel as foreground or background. As with other PCA-based methods, the initialization and update mechanisms are left undescribed.
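To make the block-based PCA idea concrete, the following sketch fits a PCA basis to the history of one N×N block and classifies a new block by its reconstruction error; the helper names, the number of components `k`, and the threshold `max_error` are all assumptions for illustration, not the exact procedure of [11] or [12].

```python
import numpy as np

def fit_block_pca(block_history, k=8):
    """block_history: (T, N*N) array holding one block's vectorized
    samples over T frames. Returns the mean and the top-k principal
    axes (k is an assumed number of components)."""
    mean = block_history.mean(axis=0)
    _, _, vt = np.linalg.svd(block_history - mean, full_matrices=False)
    return mean, vt[:k]

def block_is_background(block, mean, axes, max_error=500.0):
    """Classify a new block by its PCA reconstruction error against a
    hypothetical threshold `max_error`."""
    x = block.ravel().astype(float) - mean
    recon = axes.T @ (axes @ x)      # project onto the subspace and back
    return float(np.sum((x - recon) ** 2)) < max_error
```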
Reference [13] describes applying independent component analysis to a training sequence of video images to obtain de-mixing vectors; to separate the foreground from the background, these vectors are then used in the subsequent computation and comparison. The method is reported to be robust to indoor illumination changes.
Reference [14] introduces a two-level mechanism based on a classifier: the classifier first determines whether an image block belongs to the foreground or the background, and the model is then updated selectively according to the classification result. In [15], the dynamic patterns of the background model are learned with a self-organizing artificial neural network.
In [16], a background subtraction algorithm based on compressive sensing is built by learning and adapting a low-dimensional representation of the background. Its advantage is that compressive sensing estimates object silhouettes without any auxiliary image reconstruction. Moreover, objects can be detected correctly even when they occupy only a small portion of the scene.
In [17], background subtraction is formulated as recovery from sparse errors. The authors model each color channel of the video independently, as a linear combination of the same color channel of other frames. By weighting each color channel appropriately, the method can accurately compensate for global variations caused by changing light sources while the general structure remains unchanged.
In [18], background estimation is formulated as an optimal labeling problem in which every background pixel is labeled with a frame number indicating which color from past frames should be copied. The algorithm composes the background image by copying regions from the input frames. Since the output is a single static frame, the method is unsuitable for scenes with a changing background, but the reported results on static backgrounds are good.
Inspired by the biological mechanism of motion-sensitive neuron populations, [19] proposes a spatio-temporal saliency algorithm suited to scenes with frequent background changes. Compared with contemporary methods it lowers the average error rate, but its time cost is too high (several seconds per frame) for real-time applications.
By updating their model parameters, pixel-based background subtraction methods can compensate for their lack of spatial consistency. The simplest of these use a static background frame ([20] uses this approach), a running weighted average [21], first-order low-pass filtering [22], temporal median filtering [23], [24], or a per-pixel Gaussian model [25]-[27].
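A minimal sketch of the simplest of these pixel-wise models follows: a per-pixel running Gaussian whose mean and variance are updated with a first-order (exponential) filter, in the spirit of [21], [22], [25]-[27]; the class name and the parameters `alpha` and `k` are illustrative.

```python
import numpy as np

class RunningGaussianModel:
    """Per-pixel running Gaussian background model: a minimal sketch of
    the exponential (low-pass) update described above."""
    def __init__(self, first_frame, alpha=0.05, k=2.5):
        self.mean = first_frame.astype(float)
        self.var = np.full(first_frame.shape, 25.0)   # initial variance guess
        self.alpha, self.k = alpha, k

    def apply(self, frame):
        d = frame.astype(float) - self.mean
        foreground = np.abs(d) > self.k * np.sqrt(self.var)
        # blind exponential update of mean and variance
        self.mean += self.alpha * d
        self.var += self.alpha * (d * d - self.var)
        return foreground
```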
Wiener [28] or Kalman [29] filters use probabilistic techniques to predict the background frame over short time spans. In [28], a frame-level processing stage is added on top of the pixel-level operations to accommodate sudden global changes of the background. Median and Gaussian models can be combined so that inliers (with respect to the median) receive more weight than outliers [30], [31]. A method for correctly initializing a Gaussian background model is proposed in [32].
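The prediction idea can be sketched as a per-pixel scalar Kalman step with the background intensity as the state; this is only a schematic reduction of the filters used in [28], [29], and the noise constants `q` and `r` are assumed values.

```python
import numpy as np

def kalman_background_step(mean, p, frame, q=0.5, r=16.0):
    """One scalar Kalman step per pixel: predict (the estimate variance
    grows by the process noise q), then correct toward the observed
    frame with the Kalman gain. `mean` and `p` are float arrays."""
    p = p + q                                  # prediction step
    gain = p / (p + r)                         # Kalman gain
    mean = mean + gain * (frame.astype(float) - mean)  # correction step
    p = (1.0 - gain) * p                       # updated estimate variance
    return mean, p
```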
The W4 model proposed in [33] is relatively simple but effective. It represents each pixel of the background image with three values: the minimum and maximum intensities, and the maximum intensity difference between consecutive frames of the training sequence. Reference [34] improves W4 by adding shadow detection and removal.
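A sketch of this three-value model follows. The training part mirrors the description above directly; the classification test is one simplified reading (a pixel is declared foreground when it leaves the [m, M] envelope by more than d), so the names and the exact rule should be taken as illustrative rather than as the precise formulation of [33].

```python
import numpy as np

def train_w4(frames):
    """Per-pixel W4 training values from a stack of background frames:
    minimum m, maximum M, and largest interframe difference d."""
    stack = np.stack([f.astype(float) for f in frames])
    m, M = stack.min(axis=0), stack.max(axis=0)
    d = np.abs(np.diff(stack, axis=0)).max(axis=0)
    return m, M, d

def w4_foreground(frame, m, M, d):
    """Simplified test: foreground when the pixel falls outside the
    [m, M] envelope by more than d (one reading of the W4 rule)."""
    f = frame.astype(float)
    return ((m - f) > d) | ((f - M) > d)
```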
Motion detection filters based on ∑-Δ (sigma-delta) estimation [35]-[37] are common in embedded processing [38], [39]. As in analog-to-digital conversion, a sigma-delta motion detection filter maintains a simple nonlinear recursive approximation of the background image based solely on comparisons and elementary increments/decrements (usually -1, 0, and +1 are the only update values). Sigma-delta filters are therefore well suited to the many embedded systems that lack a floating-point unit.
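The following sketch shows one sigma-delta iteration using only integer comparisons and ±1 updates, following the description above; the amplification factor `n` and the array conventions are assumptions in the spirit of [35]-[37], not a verbatim transcription.

```python
import numpy as np

def sigma_delta_step(background, variance, frame, n=4):
    """One sigma-delta iteration on int32 arrays, using only comparisons
    and elementary +/-1 updates; `n` is an assumed sensitivity factor."""
    frame = frame.astype(np.int32)
    background += np.sign(frame - background)          # +/-1 step toward the frame
    diff = np.abs(frame - background)                  # elementwise difference
    nz = diff != 0
    variance[nz] += np.sign(n * diff - variance)[nz]   # +/-1 step toward n*diff
    return diff > variance                             # motion (foreground) mask
```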
All of these unimodal methods can produce satisfactory results in certain environments, and they are simple, fast, and easy to implement. More complex environments call for more elaborate models, however: moving backgrounds, camera motion, and scenes that are highly sensitive to noise [5].
Over the years, many more sophisticated pixel-level algorithms have been proposed. Among these, the most popular is the Gaussian mixture model (GMM) [40], [41]. First proposed in [40], it models the history of values observed at each pixel over time as a mixture of weighted Gaussians. This per-pixel background model can handle the multimodal natural scenes encountered in practice, and it performs well with repetitive background motion such as waving leaves and branches. Since its introduction, the GMM has been used widely in computer vision [4], [7], [11], [42]-[44]; interest in it remains strong, and enhanced variants continue to appear [45]-[50]. In [51], particle swarm optimization is used to tune the GMM parameters automatically. In [52], a region-based algorithm built on color histograms and texture information is combined with the GMM; the experimental results surpass those of the traditional GMM, but the computational cost is large: on an Intel Xeon 5150 the algorithm processes only about 7 frames per second at 640×480 resolution.
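For concreteness, here is a heavily simplified per-pixel sketch of the Stauffer-Grimson update [40] for scalar intensities; the learning rate `alpha`, the match factor `k`, the background-mass threshold `t`, and the shortcut `rho = alpha` are illustrative simplifications of the original formulation.

```python
import numpy as np

def gmm_update_pixel(value, weights, means, variances,
                     alpha=0.01, k=2.5, t=0.8):
    """One update step of a simplified per-pixel GMM for a scalar
    intensity `value`; weights/means/variances are length-K arrays kept
    per pixel. Returns True when the pixel is classified as background."""
    match = np.abs(value - means) < k * np.sqrt(variances)
    if match.any():
        i = int(np.argmax(match))            # first matching component
        weights *= (1.0 - alpha)
        weights[i] += alpha
        rho = alpha                          # shortcut for alpha * N(value | i)
        means[i] += rho * (value - means[i])
        variances[i] += rho * ((value - means[i]) ** 2 - variances[i])
    else:
        i = int(np.argmin(weights))          # replace the weakest component
        means[i], variances[i], weights[i] = value, 100.0, alpha
    weights /= weights.sum()
    # background components: highest weight/sigma ratio covering mass t
    order = np.argsort(-weights / np.sqrt(variances))
    n_bg = int(np.searchsorted(np.cumsum(weights[order]), t)) + 1
    return bool(match.any()) and int(np.argmax(match)) in set(order[:n_bg].tolist())
```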
Another shortcoming of the GMM lies in its strong assumptions: the background is visible more often than the foreground, and its variance is relatively small. Neither assumption holds for every time window. Moreover, as discussed in [53], if high- and low-frequency changes alternate in the background, the model's sensitivity cannot be tuned accurately and the model cannot keep up with the targets, so fast-changing targets may be lost. Similarly, in real-world noisy environments, estimating the model parameters (especially the variance) can become problematic, which has forced hardware implementations to fall back on a fixed variance. Finally, note that many natural images are in fact not Gaussian at all [54].
Because finding an appropriate probability density function is difficult, some researchers have turned to nonparametric methods for modeling the background distribution. Nonparametric kernel density estimation [53] is among the most successful. By observing a pixel's past values, the methods of [55]-[59] obtain accurate estimates that adapt to the environment. Accumulating samples of each pixel's past values builds a histogram of background values; from these histograms the algorithm estimates the probability density function and decides whether a pixel of the current frame belongs to the foreground or the background. Because nonparametric kernel density estimation incorporates the most recent observations into the pixel model, it can capture high-frequency events in the background. However, since the pixel model is updated in a first-in, first-out manner, it is doubtful whether such methods can properly handle concurrent events evolving at different speeds. Some methods therefore maintain two models, a short-term one and a long-term one [53], [60]. This offers a convenient solution in some situations, but choosing the time intervals is problematic, and in practice the two models multiply the parameters and make tuning harder. ViBe instead incorporates a lifespan policy for its sample values, which improves the overall detection quality.
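The density estimate at the heart of these methods can be sketched in a few lines: a Gaussian-kernel estimate of p(value | background) computed from a pixel's stored samples, roughly in the spirit of [53], [55]; the bandwidth `sigma` and the decision threshold are assumed values.

```python
import numpy as np

def kde_background_probability(value, samples, sigma=10.0):
    """Gaussian-kernel estimate of p(value | background) from a pixel's
    stored past samples; the bandwidth `sigma` is an assumed value."""
    z = (value - np.asarray(samples, dtype=float)) / sigma
    return float(np.mean(np.exp(-0.5 * z * z) / (sigma * np.sqrt(2.0 * np.pi))))

# A pixel would be declared foreground when this density falls below a
# hypothetical threshold, e.g. kde_background_probability(v, history) < 1e-4.
```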
In the codebook algorithm [61], [62], each pixel is represented by a codebook, a compressed form of the background model built from a long image sequence. Each codebook consists of codewords that capture colors transformed by an innovative color distortion measure. The improved version in [63] incorporates the spatio-temporal context of each pixel. Codebooks can capture background motion over a long period of time with a limited amount of memory; the codebook algorithm is thus a typical long-training-sequence method, and its codeword update mechanism [62] presupposes that training has ended. Note, however, that the update mechanism does not allow the creation of new codewords, which can be problematic when the permanent structure of the background changes (e.g., newly freed parking spots in an outdoor scene).
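A very reduced sketch of the codeword test follows. The real algorithm [62] uses a dedicated color distortion metric together with brightness bounds; this sketch substitutes a plain Euclidean distance to the codeword color, so the tuple layout and every threshold here are assumptions for illustration only.

```python
import numpy as np

def match_codeword(pixel, codeword, eps=10.0, alpha=0.7, beta=1.3):
    """codeword = (mean_rgb, i_min, i_max): a stored color plus the
    brightness bounds observed during training. The Euclidean color
    distance used here replaces the dedicated color distortion metric
    of [62]; eps, alpha, and beta are illustrative thresholds."""
    mean_rgb, i_min, i_max = codeword
    p = np.asarray(pixel, dtype=float)
    brightness = float(np.linalg.norm(p))
    color_dist = float(np.linalg.norm(p - mean_rgb))
    return color_dist <= eps and alpha * i_min <= brightness <= beta * i_max
```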
The methods of [64], [65] no longer require an accurate density model of the background; instead they rely on the notion of "consensus." They keep a cache of past observations for each background pixel and classify a new pixel value as background if it matches the stored samples of the pixel model. The hope is to avoid rash assumptions about the underlying density model, but because the pixel model is updated first-in, first-out, some prior knowledge of the problem is still required: for instance, unless a large number of pixel samples is stored in advance, the speed of the background motion becomes an issue. The algorithm requires a storage buffer of at least 20 samples, yet even with 60 samples the results do not improve significantly, so the training process must span at least 20 frames. Finally, to handle illumination changes and objects appearing in or vanishing from the background, the consensus approach adds two extra mechanisms that operate on whole objects: one at the pixel level and one at the blob level.
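The consensus test itself is simple and can be sketched directly from the description above; the tolerance `tol` and the required number of matches `min_matches` are illustrative parameters, not the values of [64], [65].

```python
import numpy as np

def consensus_classify(pixel, history, tol=10.0, min_matches=2):
    """Count how many stored samples the new value agrees with (within
    `tol`) and call the pixel background when the count reaches
    `min_matches`; both parameters are illustrative."""
    matches = np.sum(np.abs(np.asarray(history, dtype=float) - float(pixel)) <= tol)
    return int(matches) >= min_matches      # True = background
```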
ViBe handles illumination changes and the appearance and disappearance of background objects differently, without devoting special mechanisms to these issues. Besides being fast, the method concentrates on modeling objects newly absorbed into the background (a region of the background suddenly starting to move) rather than on static background objects. Its main contribution is the update policy. The key idea is to gather samples from the past and, when a new sample is added to the model, to update the model by discarding a sample chosen at random rather than the oldest one. This policy guarantees that the lifespans of the samples in a pixel model decay in a smooth exponential fashion, and it enables the method to handle concurrent events at each pixel with a single model, at various speeds, and with an acceptable memory footprint.
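This update policy can be sketched in a few lines: with a time-subsampling probability, a background pixel overwrites a randomly chosen sample of its model instead of the oldest one, which is what produces the smooth exponential decay of sample lifespans mentioned above; the data layout and the `subsample` factor are illustrative.

```python
import random

def vibe_style_update(samples, value, subsample=16):
    """Sketch of the update policy: `samples` is the list of past values
    stored for one background pixel. With probability 1/subsample the new
    value replaces a *randomly chosen* sample (not the oldest), so sample
    lifetimes decay exponentially rather than being cut off FIFO-style."""
    if random.randrange(subsample) == 0:
        samples[random.randrange(len(samples))] = value
```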
References
[1] O. Barnich and M. Van Droogenbroeck, “ViBe: A powerful random technique to estimate the background in video sequences,” in Proc. Int. Conf. Acoust., Speech Signal Process., Apr. 2009, pp. 945–948.
[2] M. Van Droogenbroeck and O. Barnich, “Visual background extractor,” World Intellectual Property Organization, WO 2009/007198, Jan. 2009, 36 pp.
[3] A. McIvor, “Background subtraction techniques,” in Proc. Image Vis. Comput., Auckland, New Zealand, Nov. 2000.
[4] R. Radke, S. Andra, O. Al-Kofahi, and B. Roysam, “Image change detection algorithms: A systematic survey,” IEEE Trans. Image Process., vol. 14, pp. 294–307, Mar. 2005.
[5] Y. Benezeth, P. Jodoin, B. Emile, H. Laurent, and C. Rosenberger, “Review and evaluation of commonly-implemented background subtraction algorithms,” in Proc. IEEE Int. Conf. Pattern Recognit., Dec. 2008, pp. 1–4.
[6] S. Elhabian, K. El-Sayed, and S. Ahmed, “Moving object detection in spatial domain using background removal techniques—State-of-art,” Recent Pat. Comput. Sci., vol. 1, pp. 32–54, Jan. 2008.
[7] M. Piccardi, “Background subtraction techniques: A review,” in Proc. IEEE Int. Conf. Syst., Man Cybern., The Hague, The Netherlands, Oct. 2004, vol. 4, pp. 3099–3104.
[8] D. Parks and S. Fels, “Evaluation of background subtraction algorithms with post-processing,” in Proc. IEEE Int. Conf. Adv. Video Signal Based Surveillance, Santa Fe, New Mexico, Sep. 2008, pp. 192–199.
[9] T. Bouwmans, F. El Baf, and B. Vachon, “Statistical background modeling for foreground detection: A survey,” in Handbook of Pattern Recognition and Computer Vision (Volume 4). Singapore: World Scientific, Jan. 2010, ch. 3, pp. 181–199.
[10] M. Seki, T. Wada, H. Fujiwara, and K. Sumi, “Background subtraction based on cooccurrence of image variations,” in Proc. IEEE Int. Conf. Comput. Vis. Pattern Recognit., Los Alamitos, CA, Jun. 2003, vol. 2, pp. 65–72.
[11] P. Power and J. Schoonees, “Understanding background mixture models for foreground segmentation,” in Proc. Image Vis. Comput., Auckland, New Zealand, Nov. 2002, pp. 267–271.
[12] N. Oliver, B. Rosario, and A. Pentland, “A Bayesian computer vision system for modeling human interactions,” IEEE Trans. Pattern Anal. Mach. Intell., vol. 22, no. 8, pp. 831–843, Aug. 2000.
[13] D.-M. Tsai and S.-C. Lai, “Independent component analysis-based background subtraction for indoor surveillance,” IEEE Trans. Image Process., vol. 18, no. 1, pp. 158–167, Jan. 2009.
[14] H.-H. Lin, T.-L. Liu, and J.-C. Chuang, “Learning a scene background model via classification,” IEEE Trans. Signal Process., vol. 57, no. 5, pp. 1641–1654, May 2009.
[15] L. Maddalena and A. Petrosino, “A self-organizing approach to background subtraction for visual surveillance applications,” IEEE Trans. Image Process., vol. 17, no. 7, pp. 1168–1177, Jul. 2008.
[16] V. Cevher, A. Sankaranarayanan, M. Duarte, D. Reddy, R. Baraniuk, and R. Chellappa, “Compressive sensing for background subtraction,” in Proc. Eur. Conf. Comput. Vis., Oct. 2008, pp. 155–168.
[17] M. Dikmen and T. Huang, “Robust estimation of foreground in surveillance videos by sparse error estimation,” in Proc. IEEE Int. Conf. Pattern Recognit., Tampa, FL, Dec. 2008, pp. 1–4.
[18] S. Cohen, “Background estimation as a labeling problem,” in Proc. Int. Conf. Comput. Vis., Beijing, China, Oct. 2005, vol. 2, pp. 1034–1041.
[19] V. Mahadevan and N. Vasconcelos, “Spatiotemporal saliency in dynamic scenes,” IEEE Trans. Pattern Anal. Mach. Intell., vol. 32, no. 1, pp. 171–177, Jan. 2010.
[20] M. Sivabalakrishnan and D. Manjula, “An efficient foreground detection algorithm for visual surveillance system,” Int. J. Comput. Sci. Network Sec., vol. 9, pp. 221–227, May 2009.
[21] A. Cavallaro and T. Ebrahimi, “Video object extraction based on adaptive background and statistical change detection,” in Proc. Vis. Commun. Image Process., Jan. 2001, pp. 465–475.
[22] A. El Maadi and X. Maldague, “Outdoor infrared video surveillance: A novel dynamic technique for the subtraction of a changing background of IR images,” Infrared Phys. Technol., vol. 49, pp. 261–265, Jan. 2007.
[23] R. Abbott and L. Williams, “Multiple target tracking with lazy background subtraction and connected components analysis,” Mach. Vis. Appl., vol. 20, pp. 93–101, Feb. 2009.
[24] B. Shoushtarian and H. Bez, “A practical adaptive approach for dynamic background subtraction using an invariant colour model and object tracking,” Pattern Recognit. Lett., vol. 26, pp. 5–26, Jan. 2005.
[25] J. Cezar, C. Rosito, and S. Musse, “A background subtraction model adapted to illumination changes,” in Proc. IEEE Int. Conf. Image Process., Oct. 2006, pp. 1817–1820.
[26] J. Davis and V. Sharma, “Robust background-subtraction for person detection in thermal imagery,” in Proc. IEEE Int. Conf. Comput. Vis. Pattern Recognit., Washington, DC, Jun. 2004, vol. 8, p. 128.
[27] C. Wren, A. Azarbayejani, T. Darrell, and A. Pentland, “Pfinder: Realtime tracking of the human body,” IEEE Trans. Pattern Anal. Mach. Intell., vol. 19, no. 7, pp. 780–785, Jul. 1997.
[28] K. Toyama, J. Krumm, B. Brumitt, and M. Meyers, “Wallflower: Principles and practice of background maintenance,” in Proc. Int. Conf. Comput. Vis., Kerkyra, Greece, Sep. 1999, pp. 255–261.
[29] D. Koller, J. Weber, and J. Malik, “Robust multiple car tracking with occlusion reasoning,” in Proc. Eur. Conf. Comput. Vis., Stockholm, Sweden, May 1994, pp. 189–196.
[30] J. Davis and V. Sharma, “Background-subtraction in thermal imagery using contour saliency,” Int. J. Comput. Vis., vol. 71, pp. 161–181, Feb. 2007.
[31] C. Jung, “Efficient background subtraction and shadow removal for monochromatic video sequences,” IEEE Trans. Multimedia, vol. 11, no. 3, pp. 571–577, Apr. 2009.
[32] D. Gutchess, M. Trajkovics, E. Cohen-Solal, D. Lyons, and A. Jain, “A background model initialization algorithm for video surveillance,” in Proc. Int. Conf. Comput. Vis., Vancouver, BC, Jul. 2001, vol. 1, pp. 733–740.
[33] I. Haritaoglu, D. Harwood, and L. Davis, “ W4: Real-time surveillance of people and their activities,” IEEE Trans. Pattern Anal. Mach. Intell., vol. 22, no. 8, pp. 809–830, Aug. 2000.
[34] J. Jacques, C. Jung, and S. Musse, “Background subtraction and shadow detection in grayscale video sequences,” in Proc. Brazilian Symp. Comput. Graph. Image Process., Natal, Brazil, Oct. 2005, pp. 189–196.
[35] A. Manzanera and J. Richefeu, “A robust and computationally efficient motion detection algorithm based on sigma-delta background estimation,” in Proc. Indian Conf. Comput. Vis., Graph. Image Process., Kolkata, India, Dec. 2004, pp. 46–51.
[36] A. Manzanera and J. Richefeu, “A new motion detection algorithm based on ∑-Δ background subtraction and the Zipf law,” in Proc. Progr. Pattern Recognit., Image Anal. Appl., Nov. 2007, pp. 42–51.
[38] L. Lacassagne, A. Manzanera, J. Denoulet, and A. Mérigot, “High performance motion detection: Some trends toward new embedded architectures for vision systems,” J. Real-Time Image Process., vol. 4, pp. 127–146, Jun. 2009.
[39] L. Lacassagne, A. Manzanera, and A. Dupret, “Motion detection: Fast and robust algorithms for embedded systems,” in Proc. IEEE Int. Conf. Image Process., Cairo, Egypt, Nov. 2009, pp. 3265–3268.
[40] C. Stauffer and E. Grimson, “Adaptive background mixture models for real-time tracking,” in Proc. IEEE Int. Conf. Comput. Vis. Pattern Recognit., Ft. Collins, CO, Jun. 1999, vol. 2, pp. 246–252.
[41] C. Stauffer and E. Grimson, “Learning patterns of activity using realtime tracking,” IEEE Trans. Pattern Anal. Mach. Intell., vol. 22, no. 8, pp. 747–757, Aug. 2000.
[42] B. Lei and L. Xu, “Real-time outdoor video surveillance with robust foreground extraction and object tracking via multi-state transition management,” Pattern Recognit. Lett., vol. 27, pp. 1816–1825, Nov. 2006.
[43] Y. Wang, T. Tan, K. Loe, and J. Wu, “A probabilistic approach for foreground and shadow segmentation in monocular image sequences,” Pattern Recognit., vol. 38, pp. 1937–1946, Nov. 2005.
[44] Y. Wang, K. Loe, and J. Wu, “A dynamic conditional random field model for foreground and shadow segmentation,” IEEE Trans. Pattern Anal. Mach. Intell., vol. 28, no. 2, pp. 279–289, Feb. 2006.
[45] O. Barnich, S. Jodogne, and M. Van Droogenbroeck, “Robust analysis of silhouettes by morphological size distributions,” in Advanced Concepts for Intelligent Vision Systems (ACIVS 2006), Vol. 4179 of Lecture Notes on Computer Science. New York: Springer-Verlag, Sep. 2006, pp. 734–745.
[46] J.-S. Hu and T.-M. Su, “Robust background subtraction with shadow and highlight removal for indoor surveillance,” EURASIP J. Appl. Signal Process., vol. 2007, pp. 108–108, Jan. 2007.
[47] P. KaewTraKulPong and R. Bowden, “An improved adaptive background mixture model for real-time tracking with shadow detection,” in Proc. Eur. Workshop Adv. Video Based Surveillance Syst., London, U.K., Sep. 2001.
[48] D. Lee, “Effective Gaussian mixture learning for video background subtraction,” IEEE Trans. Pattern Anal. Mach. Intell., vol. 27, no. 5, pp. 827–832, May 2005.
[49] Q. Zang and R. Klette, “Robust background subtraction and maintenance,” in Proc. IEEE Int. Conf. Pattern Recognit., Washington, DC, Aug. 2004, vol. 2, pp. 90–93.
[50] Z. Zivkovic, “Improved adaptive Gaussian mixture model for background subtraction,” in Proc. IEEE Int. Conf. Pattern Recognit., Cambridge, U.K., Aug. 2004, vol. 2, pp. 28–31.
[51] B. White and M. Shah, “Automatically tuning background subtraction parameters using particle swarm optimization,” in Proc. IEEE Int. Conf. Multimedia Expo, Beijing, China, Jul. 2007, pp. 1826–1829.
[52] P. Varcheie, M. Sills-Lavoie, and G.-A. Bilodeau, “A multiscale region-based motion detection and background subtraction algorithm,” Sensors, vol. 10, pp. 1041–1061, Jan. 2010.
[53] A. Elgammal, D. Harwood, and L. Davis, “Non-parametric model for background subtraction,” in Proc. 6th Eur. Conf. Comput. Vis., London, U.K., Jun.–Jul. 2000, pp. 751–767.
[54] A. Srivastava, A. Lee, E. Simoncelli, and S.-C. Zhu, “On advances in statistical modeling of natural images,” J. Math. Imag. Vis., vol. 18, pp. 17–33, Jan. 2003.
[55] A. Elgammal, R. Duraiswami, D. Harwood, and L. Davis, “Background and foreground modeling using nonparametric kernel density estimation for visual surveillance,” Proc. IEEE, vol. 90, no. 7, pp. 1151–1163, Jul. 2002.
[56] A. Mittal and N. Paragios, “Motion-based background subtraction using adaptive kernel density estimation,” in Proc. IEEE Int. Conf. Comput. Vis. Pattern Recognit., Los Alamitos, CA, Jun.–Jul. 2004, vol. 2, pp. 302–309.
[57] Y. Sheikh and M. Shah, “Bayesian modeling of dynamic scenes for object detection,” IEEE Trans. Pattern Anal. Mach. Intell., vol. 27, no. 11, pp. 1778–1792, Nov. 2005.
[58] Z. Zivkovic and F. van der Heijden, “Efficient adaptive density estimation per image pixel for the task of background subtraction,” Pattern Recognit. Lett., vol. 27, pp. 773–780, May 2006.
[59] A. Tavakkoli, M. Nicolescu, G. Bebis, and M. Nicolescu, “Non-parametric statistical background modeling for efficient foreground region detection,” Mach. Vis. Appl., vol. 20, pp. 395–409, Oct. 2008.
[60] E. Monari and C. Pasqual, “Fusion of background estimation approaches for motion detection in non-static backgrounds,” in Proc. IEEE Int. Conf. Adv. Video Signal Based Surveillance, London, U.K., Sep. 2007, pp. 347–352.
[61] K. Kim, T. Chalidabhongse, D. Harwood, and L. Davis, “Background modeling and subtraction by codebook construction,” in Proc. IEEE Int. Conf. Image Process., Singapore, Oct. 2004, vol. 5, pp. 3061–3064.
[62] K. Kim, T. Chalidabhongse, D. Harwood, and L. Davis, “Real-time foreground-background segmentation using codebook model,” Real-Time Imag., vol. 11, Special Issue on Video Object Processing, pp. 172–185, Jun. 2005.
[63] M. Wu and X. Peng, “Spatio-temporal context for codebook-based dynamic background subtraction,” Int. J. Electron. Commun., vol. 64, no. 8, pp. 739–747, 2010.
[64] H. Wang and D. Suter, “Background subtraction based on a robust consensus method,” in Proc. IEEE Int. Conf. Pattern Recognit., Washington, DC, Aug. 2006, pp. 223–226.
[65] H. Wang and D. Suter, “A consensus-based method for tracking: Modelling background scenario and foreground appearance,” Pattern Recognit., vol. 40, pp. 1091–1105, Mar. 2007.
[66] P.-M. Jodoin, M. Mignotte, and J. Konrad, “Statistical background subtraction using spatial cues,” IEEE Trans. Circuits Syst. Video Technol., vol. 17, no. 12, pp. 1758–1763, Dec. 2007.
[67] A. Criminisi, P. Pérez, and K. Toyama, “Region filling and object removal by exemplar-based image inpainting,” IEEE Trans. Image Process., vol. 13, no. 9, pp. 1200–1212, Sep. 2004.
[68] A. Papoulis, Probability, Random Variables, and Stochastic Processes. New York: McGraw-Hill, 1984.
[69] R. Cucchiara, C. Grana, M. Piccardi, and A. Prati, “Detecting moving objects, ghosts, and shadows in video streams,” IEEE Trans. Pattern Anal. Mach. Intell., vol. 25, no. 10, pp. 1337–1342, Oct. 2003.
[70] L. Li, W. Huang, I. Gu, and Q. Tian, “Foreground object detection from videos containing complex background,” in Proc. ACM Int. Conf. Multimedia, Berkeley, CA, Nov. 2003, pp. 2–10.
[71] C.-C. Chiu, M.-Y. Ku, and L.-W. Liang, “A robust object segmentation system using a probability-based background extraction algorithm,” IEEE Trans. Circuits Syst. Video Technol., vol. 20, no. 4, pp. 518–528, Apr. 2010.
[72] B. Lucas and T. Kanade, “An iterative image registration technique with an application to stereo vision,” in Proc. Int. Joint Conf. Artif. Intell., Vancouver, BC, Apr. 1981, pp. 674–679.