Jeff Bier
4/4/2013 8:00 AM EDT
According to MIT neuroscientist Mriganka Sur, half of the human brain is devoted to vision. Why? I'm no neuroscientist, but I imagine that there are two reasons. First, vision is extremely valuable: humans use it constantly, and for an endless variety of tasks, from reading to navigation to creating all manner of objects. Second, vision is a hard problem, considering all of the things that we're able to discern visually, under widely varying and often very challenging conditions such as glare and low light.
Computer vision enables machines to understand things through visual inputs, sometimes even exceeding the capabilities of human vision. For decades, computer vision has been a niche technology, because computer vision equipment has been large, expensive, and complex to use. But recently, products like the Microsoft Kinect and vision-based automotive safety systems have demonstrated that computer vision can now be deployed even in cost-sensitive applications, and in ways that are easy for non-specialists to use. The term "embedded vision" is often used to refer to the incorporation of visual intelligence into a wide range of systems.
On April 25, 2013 at the San Jose Convention Center, the Embedded Vision Alliance will host the Embedded Vision Summit, a technical educational forum for engineers interested in incorporating visual intelligence into electronic systems and software. The Summit will include presentations on the key technologies that are enabling the widespread use of computer vision, including vision sensors, algorithms, processors and development tools. The Summit will also showcase over 20 demonstrations of state-of-the-art embedded vision technologies. This image gallery highlights some of the technologies and applications that will be presented at the Summit.
For more information about the Embedded Vision Summit, or to register, visit www.embedded-vision.com/embedded-vision-summit.
Car and driver
In cars, embedded vision is being deployed to reduce accidents. The demo pictured here is a Xilinx reference design that uses four cameras mounted on the exterior of the car. Using the video feeds from these cameras, an embedded vision system can perform numerous safety functions. For example, it can provide the driver with a bird's-eye view of the car and its surroundings to aid in safe parking. And it can detect and read road signs and warn the driver if they're exceeding the speed limit. At the Embedded Vision Summit, Paul Zoratti, Xilinx Automotive Driver Assistance System Architect, will present some of the key challenges and techniques of automotive vision systems. Registration for the Summit is free for qualified engineers, but space is limited.
(Source: Jeff Bier)
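For the bird's-eye view, each camera's ground-plane pixels are typically remapped into a common overhead coordinate frame using a planar homography computed during calibration. The sketch below is a minimal pure-Python illustration of that remapping step, not Xilinx's actual implementation; the matrix values are made up for the example.

```python
def apply_homography(H, x, y):
    """Map pixel (x, y) through a 3x3 homography H (row-major nested lists)."""
    xs = H[0][0] * x + H[0][1] * y + H[0][2]
    ys = H[1][0] * x + H[1][1] * y + H[1][2]
    w  = H[2][0] * x + H[2][1] * y + H[2][2]
    return xs / w, ys / w   # perspective divide

# Hypothetical homography for one camera, found offline by calibrating
# against known ground markers (values here are illustrative only).
H_front = [[1.0, 0.2,   5.0],
           [0.0, 1.5,   0.0],
           [0.0, 0.001, 1.0]]

# Map one camera pixel to overhead "ground" coordinates.
u, v = apply_homography(H_front, 100, 200)
```

A real system applies this warp to every pixel of every camera (usually via a precomputed lookup table in hardware) and blends the four warped images into one composite overhead view.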
(Source: Image courtesy of U.C. Berkeley)
Professor Pieter Abbeel of U.C. Berkeley, a keynote speaker at the Embedded Vision Summit, is teaching robots to fold laundry. Though it's a simple (if tedious) task for humans, sorting and folding laundry is extremely difficult to automate because there's so much unpredictability: laundry comes out of the dryer in a jumbled mess, with items stuck together, inside out, and crumpled into odd shapes. Abbeel and his team at Berkeley are making big strides in computer vision and machine learning using the laundry project, and the advances made there will be applicable to numerous related problems, including robotic surgery. See the abstract of Abbeel's April 25th keynote speech at the Embedded Vision Summit.
Low-cost dev kit
Avnet recently introduced the Blackfin Embedded Vision Starter Kit, a low-cost development kit for engineers who want to incorporate embedded vision capabilities into products. The kit is based on the Analog Devices Blackfin BF609 chip, which incorporates dual Blackfin CPU/DSP cores and a specialized Parallel Vision Processor. Priced at $199 (special introductory pricing), the kit also incorporates a high-definition image sensor, lens and LED lighting. The kit will be demonstrated at the Embedded Vision Summit. The day after the Summit, Avnet, Analog Devices and BDTI will present a half-day, hands-on workshop for engineers who want to learn how to use the kit. For details on the workshop, visit Avnet.
(Source: Avnet Electronics Marketing)
A good memory for faces
Face recognition is often thought of as a technology solely for security and surveillance, but in fact it has many other applications. For example, consumer electronics products that can recognize their users can automatically adjust to each user's stored preferences. One of the key challenges in deploying embedded vision functions like face recognition into consumer applications is the need to make vision subsystems small, inexpensive, energy efficient and durable. CogniVue is an Ottawa-based start-up company that has developed a highly integrated "smart camera" module that packs a CMOS image sensor, memory and a specialized parallel vision processor into roughly a one-inch cube. At the Embedded Vision Summit, Simon Morris, CEO of CogniVue, will present lessons learned from CogniVue's vision system development efforts.
Better photos through algorithms
Smartphones have become the cameras that consumers use most often for capturing still images and video. But the thin form factor of smartphones places severe limitations on sensors and lenses, which limits the quality of captured images. "Computational photography" refers to the use of algorithms to improve the quality or otherwise enhance captured images. These algorithms often rely on computer vision techniques, such as identifying common features in successive frames of a panoramic image sequence in order to automatically align and stitch the images together to create a seamless panorama. At the Embedded Vision Summit, CEVA will present a talk on "Using Advanced Algorithms to Bring Vision Capabilities to Power-Constrained Devices", and Qualcomm will present "Heterogeneous Mobile Processing Platforms for Computer Vision Applications." For the complete agenda, click here.
(Source: CEVA)
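The alignment step these algorithms perform can be shown with a deliberately simplified sketch: find the shift that best matches two overlapping scan lines, then join them at the seam. Real stitchers match 2-D feature points and blend exposure across the seam, but the principle is the same. This is an illustrative toy, not CEVA's or Qualcomm's method.

```python
def best_offset(a, b, max_shift):
    """Shift of b relative to a that minimizes the mean squared
    difference over the overlap (a crude stand-in for feature matching)."""
    best, best_err = 0, float("inf")
    for s in range(max_shift + 1):
        pairs = list(zip(a[s:], b))      # overlap when b starts at column s
        if not pairs:
            continue
        err = sum((x - y) ** 2 for x, y in pairs) / len(pairs)
        if err < best_err:
            best, best_err = s, err
    return best

def stitch(a, b):
    s = best_offset(a, b, len(a) - 1)
    return a[:s] + b                     # keep a up to the seam, then append b

# Two scan lines from overlapping frames of a panned shot:
left  = [10, 20, 30, 40, 50, 60]
right = [40, 50, 60, 70, 80]
print(stitch(left, right))               # -> [10, 20, 30, 40, 50, 60, 70, 80]
```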
(Source: OpenCV)
Improved software tools and libraries are easing the development of new embedded vision products. For example, OpenCV (www.OpenCV.org) is a widely used open-source computer vision software library. Originally developed for PCs, OpenCV is increasingly available on embedded processors as well. At the Embedded Vision Summit, Gary Bradski, founder of OpenCV and president of the OpenCV Foundation, will give a presentation highlighting some of the recent developments in OpenCV as well as planned future improvements. Jose Alvarez of Xilinx will talk about ways to accelerate OpenCV-based vision applications using custom parallel processing structures.
(Source: BDTI)
Getting started with embedded vision application development can be daunting for the uninitiated. To ease the process, BDTI has created the BDTI OpenCV Executable Demo package, a free set of OpenCV-based demos that run on a Windows PC and use a webcam to capture and process real-time video, demonstrating classic computer vision algorithms such as Viola-Jones face detection. The package provides a simple user interface that lets users vary the parameters of selected OpenCV algorithms and see in real time how those changes affect each algorithm's behavior. And for developers who want a quick path to starting OpenCV application development, BDTI offers the BDTI OpenCV Quick-Start Kit, a free virtual machine image that delivers all of the tools and libraries needed for OpenCV development in a single easy-to-use package. These free tools can be downloaded from the Embedded Vision Alliance.
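Viola-Jones works as a cascade of cheap tests: a candidate window is rejected the moment any stage fails, so most non-face windows exit after a stage or two, and the stage thresholds are exactly the kind of parameters such demos let users vary. Here is a toy pure-Python sketch of the cascade idea; the "features" are placeholders standing in for real Haar-like features computed on integral images.

```python
def cascade_classify(window, stages):
    """Cascade of (feature_fn, threshold) stages with early rejection."""
    for feature_fn, threshold in stages:
        if feature_fn(window) < threshold:
            return False      # early rejection: stop at the first failed stage
    return True               # survived every stage: candidate detection

# Toy "features": mean brightness and left/right contrast (illustrative only).
mean_val = lambda w: sum(w) / len(w)
contrast = lambda w: abs(sum(w[:len(w) // 2]) - sum(w[len(w) // 2:]))

# Stage thresholds are the tunable parameters: raising them rejects more
# windows (fewer false positives, more missed detections), lowering them
# does the opposite.
stages = [(mean_val, 50), (contrast, 10)]

print(cascade_classify([100, 90, 20, 10], stages))  # True
print(cascade_classify([5, 5, 5, 5], stages))       # False (fails stage 1)
```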
3-D gets affordable
Like many other computer vision technologies, 3D cameras used to be too big and expensive for most potential applications. The Microsoft Kinect showed that this is no longer the case. Since the debut of the Kinect, there has been a surge of R&D investment aimed at creating small, low-cost 3D cameras. 3D vision is attractive because it simplifies many problems that are difficult with 2D sensors, such as background subtraction. At the Embedded Vision Summit, Goksel Dedeoglu of Texas Instruments will present a talk on the principles and applications of stereoscopic 3D cameras, and will demonstrate a reference design for a low-cost 3D camera, pictured here. Also at the Summit, Gershom Kutliroff, CTO of Omek, will present "Why 3D Sensors Are a Game-Changer for Embedded Vision," including a case study of vision-based hand pose estimation for gesture control. For more information or to register for the Embedded Vision Summit, click here.
(Source: Texas Instruments and Omek)
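Stereoscopic cameras recover depth by finding, for each patch in one camera's image, the matching patch in the other camera's image; the horizontal shift between the two matches (the disparity) is inversely proportional to distance. A minimal 1-D block-matching sketch in pure Python, illustrative only and not TI's reference design:

```python
def disparity(left, right, block=3, max_d=4):
    """For each position in the left scan line, find the shift into the
    right scan line with the lowest sum of absolute differences (SAD).
    Depth is inversely proportional to the returned disparity."""
    out = []
    for x in range(len(left) - block + 1):
        ref = left[x:x + block]
        best_d, best_sad = 0, float("inf")
        for d in range(min(max_d, x) + 1):   # object sits d pixels left in right image
            cand = right[x - d:x - d + block]
            sad = sum(abs(r - c) for r, c in zip(ref, cand))
            if sad < best_sad:
                best_d, best_sad = d, sad
        out.append(best_d)
    return out

# A bright object that appears 2 pixels further right in the left camera:
left_line  = [0, 0, 9, 9, 9, 0, 0, 0]
right_line = [9, 9, 9, 0, 0, 0, 0, 0]
print(disparity(left_line, right_line))      # -> [0, 1, 2, 2, 2, 0]
```

Note that the textureless (all-zero) region matches equally well at every shift, so the matcher defaults to disparity 0 there; this ambiguity on flat surfaces is one of the classic weaknesses of passive stereo that the structured-light approach used by the Kinect avoids.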