I came across a video online demonstrating a new object-tracking algorithm, part of the doctoral thesis of Zdenek Kalal, a Czech student at the University of Surrey in the UK. He demonstrates his remarkable precision-tracking system, which can follow almost any object in the camera's view, as long as you can see it and select it. It can do many amazing things. In the video, he points a camera at his finger and selects it as the target; the system then tracks the finger's movements precisely. Even more impressive, the system refines its tracking algorithm by analyzing the object's motion: in a very short time you can teach it to track your finger, your face, or a car racing down a highway. With a system like this, we could come close to a "Minority Report"-style human-computer interface, in the spirit of Microsoft's Kinect for the Xbox but with better results. More valuable still, Zdenek Kalal has open-sourced the code and provides a test program.
Kalal has 12 videos demonstrating what the algorithm can do. With a good camera and this software installed on a computer, tablet, or phone, it can precisely locate and track a point on your forehead, your fingertip, or your eyes. Put a camera outside your door and it can recognize that someone you know has arrived, or warn you that a stranger is there. People could operate a computer without using their hands at all. The potential applications of this technology are broad.

You can find the program's code on the University of Surrey's website; it is free. Kalal was awarded a "Technology Everywhere" scholarship in recognition of this work. http://info.ee.surrey.ac.uk/Personal/Z.Kalal/tld.html
Zdenek Kalal's video demonstration:

Product Introduction
Key Features

  • Input: video stream from a single monocular camera, bounding box defining the object (see the usage sketch after this list)
  • Output: object location in the stream, object model
  • Implementation: Matlab + C, single thread, no GPU
  • No offline training stage
  • Real-time performance on QVGA video stream
  • Dependence on OpenCV library (single function)
  • Ported to Windows, Mac OS X and Linux
  • Illumination invariant features
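
As a concrete illustration of the input/output contract above, here is a minimal Python sketch. It uses the TLD tracker that later shipped in OpenCV's contrib tracking module, which is a reimplementation inspired by Kalal's algorithm rather than his original Matlab + C release; the opencv-contrib-python package, the version-dependent factory lookup, and the camera index are all assumptions about your local setup.

import cv2

# Open the default camera (index 0 is an assumption) and grab one frame.
cap = cv2.VideoCapture(0)
ok, frame = cap.read()
if not ok:
    raise RuntimeError("could not read a frame from the camera")

# Input: let the user draw a bounding box around the object to track.
bbox = cv2.selectROI("TLD demo", frame, fromCenter=False)

# The factory function moved between OpenCV versions:
# cv2.TrackerTLD_create in 3.x, cv2.legacy.TrackerTLD_create in 4.5+.
create_tld = getattr(cv2, "TrackerTLD_create", None)
if create_tld is None:
    create_tld = cv2.legacy.TrackerTLD_create
tracker = create_tld()
tracker.init(frame, bbox)

while True:
    ok, frame = cap.read()
    if not ok:
        break
    # Output: the object's location in the current frame, if still visible.
    found, bbox = tracker.update(frame)
    if found:
        x, y, w, h = (int(v) for v in bbox)
        cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)
    cv2.imshow("TLD demo", frame)
    if cv2.waitKey(1) & 0xFF == 27:  # press Esc to quit
        break

cap.release()
cv2.destroyAllWindows()

Note how this matches the feature list: there is no offline training stage. The tracker is initialized from a single user-supplied bounding box and keeps learning the object's appearance as the video plays.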

Free Version (the source code is free and can be downloaded from GitHub)
TLD can be downloaded for testing in a chosen application. We provide a precompiled demo (Windows) and source code released under GPL version 3.0. In short, this means that any distributed project that includes or links to any portion of the TLD source code has to be released with its source code under the GPL version 3.0 license or later.
Commercial Version 
A license has to be purchased for using TLD in a commercial project. The licensing is managed by the IP owner, the University of Surrey, and the fee is subject to negotiation. Please contact the University of Surrey for further information.
More Information 

  • High-level description of TLD
  • Components of TLD
  • Learning component of TLD
  • Application of TLD tracker to faces
  • A detailed description can be found in the following papers: ICCV'09 (w), CVPR'10, ICIP'10, ICPR'10
  • Many technical questions (e.g. installation) are discussed in the following discussion group.