When Alipay's face-to-face payment feature (当面付) began using ultrasound for short-range, small-payload data transfer, I marveled at what a great and admirable invention it was, because it requires no additional hardware at all!
Later, WeChat applied ultrasonic communication to its "radar friend finder" feature; Tencent is always good at learning and discovering, which is admirable too.
And today, Microsoft has even applied ultrasonic technology to gesture recognition! The link below describes it in detail:
http://article.yeeyan.org/compare/286069
Gesture Recognition Using Sound Waves
From the Doppler effect in high-school physics, we know that when a wave source is moving, the frequency an observer perceives changes; an ambulance siren is a familiar example. But you probably never thought of using the Doppler effect to control a computer.
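As a quick refresher (a minimal sketch, not part of the article; the 440 Hz siren and 30 m/s speed are illustrative values), the textbook Doppler formula for a moving source and a stationary observer is f_obs = f_src · c / (c − v):

```python
# Doppler shift for a moving sound source (e.g. an ambulance siren).
# Illustrative values: 440 Hz tone, speed of sound c = 343 m/s, source at 30 m/s.

C = 343.0  # speed of sound in air, m/s


def doppler_moving_source(f_src: float, v_src: float) -> float:
    """Frequency heard by a stationary observer.
    v_src > 0 means the source is approaching; v_src < 0 means it is receding."""
    return f_src * C / (C - v_src)


print(round(doppler_moving_source(440.0, 30.0), 1))   # pitch rises while approaching
print(round(doppler_moving_source(440.0, -30.0), 1))  # pitch falls while receding
```

This is exactly the effect SoundWave exploits, except with an ultrasonic tone and reflections off a moving hand instead of a moving siren.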
The SoundWave system can sense gestures using only sound—thanks to the Doppler Effect, some clever
software, and the built-in speakers and microphone on a laptop.
Desney Tan,
a Microsoft Research principal researcher and member of the SoundWave
team, says the technology can already be used to sense a number of
simple gestures, and with smart phones and laptops starting to include
multiple speakers and microphones, the technology could become even more
sensitive. SoundWave—a collaboration between Microsoft Research and the
University of Washington—will be presented this week in a paper at the 2012 ACM SIGCHI Conference on Human Factors in Computing Systems in Austin, Texas.
The idea for SoundWave emerged last summer, when Tan and others
were working on a project involving using ultrasonic transducers to
create haptic effects, and one researcher noticed a sound wave changing
in a surprising way as he moved his body around. The transducers were
emitting an ultrasonic sound wave that was bouncing off researchers’
bodies, and their movements changed the tone of the sound that was
picked up, and the sound wave they viewed on the back end.
The researchers quickly determined that this could be useful for
gesture sensing. And since many devices already have microphones and
speakers embedded, they experimented to see if they could use those
existing sensors to detect movements. Tan says standard computer
speakers and microphones can operate in the ultrasonic band—beyond what
humans can hear—which means that all it takes to make the technology
work on a laptop or smart phone is loading the SoundWave software.
Chris Harrison, a graduate student at Carnegie Mellon University who
studies sensing for user interfaces, calls SoundWave’s ability to
operate with existing hardware and a software update “a huge win.”
“I think it has some interesting potential,” he says.
The speakers on a computer equipped with SoundWave software emit a
constant ultrasonic tone of between 20 and 22 kilohertz. If nothing in
the immediate environment is moving, the tone the computer’s microphone
hears should also be constant. But if something is moving toward the
computer, that tone will shift to a higher frequency. If it’s moving
away, the tone will shift to a lower frequency.
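Because the speaker and microphone sit together in the laptop, the shift is applied twice: once on the way to the moving hand and once on the reflection back. A minimal sketch of that two-way Doppler relation, using the article's 20 kHz tone and an assumed hand speed of 0.5 m/s:

```python
# Two-way Doppler shift for a tone reflected off a moving object,
# with speaker and microphone co-located (as on a laptop).
# The 20 kHz tone matches the article; the 0.5 m/s hand speed is an assumption.

C = 343.0  # speed of sound in air, m/s


def reflected_freq(f_tx: float, v: float) -> float:
    """Frequency received after reflection; v > 0 means the object approaches."""
    return f_tx * (C + v) / (C - v)


f_tx = 20_000.0            # transmitted ultrasonic tone, Hz
for v in (0.5, -0.5):      # hand moving toward / away from the computer
    print(v, round(reflected_freq(f_tx, v), 1))
```

Even at these modest speeds the received tone moves by tens of hertz, which is comfortably resolvable with a standard FFT.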
This happens in predictable patterns, Tan says, so the frequencies
can be analyzed to determine how big the moving object is, how fast it’s
moving, and the direction it’s going. Based on all that, SoundWave can
infer gestures.
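The paper does not spell out the analysis pipeline here, but the core step it describes—watching where the energy around the pilot tone moves in the spectrum—can be sketched roughly like this (a toy illustration, not SoundWave's actual algorithm; the sample rate, window length, and threshold are assumptions):

```python
# A toy sketch (not SoundWave's actual algorithm) of classifying motion
# direction from the Doppler shift of a reflected pilot tone via an FFT.
import numpy as np

FS = 44_100          # assumed sample rate, Hz
F_PILOT = 20_000.0   # emitted ultrasonic tone, Hz (within the article's 20-22 kHz band)
N = 4096             # analysis window length, samples


def motion_direction(samples: np.ndarray) -> str:
    """Classify the dominant reflection as approaching, receding, or still."""
    spectrum = np.abs(np.fft.rfft(samples * np.hanning(len(samples))))
    freqs = np.fft.rfftfreq(len(samples), 1.0 / FS)
    peak = freqs[np.argmax(spectrum)]
    bin_width = FS / len(samples)        # ~10.8 Hz per bin with these settings
    if peak > F_PILOT + bin_width:
        return "approaching"             # echo shifted up in frequency
    if peak < F_PILOT - bin_width:
        return "receding"                # echo shifted down in frequency
    return "still"


t = np.arange(N) / FS
echo = np.sin(2 * np.pi * 20_058.0 * t)  # simulated echo from an approaching hand
print(motion_direction(echo))            # prints "approaching"
```

A real system would track the width and magnitude of the shifted band over time, which is what lets SoundWave estimate size and speed as well as direction.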
The software’s accuracy hovers in the 90 percent range, Tan says, and
there isn’t a noticeable delay between when a user makes a gesture and
the computer’s response. And SoundWave can operate while you’re using
the speakers for other things, too.
So far, the SoundWave team has come up with a range of movements that
its software can understand, including swiping your hand up or down,
moving it toward or away from your body, flexing your limbs, or moving
your entire body closer to or farther away from the computer. With these
gestures, researchers are able to scroll through pages on a computer
screen and control simple Web navigation. Sensing when a user approaches
a computer or walks away from it could be used to automatically wake it
up or put it to sleep, Tan says.
Harrison thinks that having a limited number of gestures is fine,
especially since users will have to memorize them. The SoundWave team
has also used its technology to control a game of Tetris, which, aside
from being fun, provided a good test of the system’s accuracy and speed.
Tan envisions SoundWave working alongside other gesture-sensing
technologies, saying that while it doesn’t face the lighting issues that
vision-based technologies do, it’s not as good at sensing small
gestures like a pinch of the fingers. “Ideally there are lots of sensors
around the world, and the user doesn’t know or care what the sensors
are, they’re just interacting with their tasks,” he says.