Image Recognition

Reposted from: http://www.kickstarter.com/projects/visionai/vmx-project-computer-vision-for-everyone?ref=category

Webapp for real-time visual object recognition and an API for building vision-aware apps. Let our vision empower your vision.

(Update: January 5th 2014) In response to backer (and potential backer) demand, we will offer a single machine license and linux-binary for a local VMX installation. Local installation will not consume VMX Compute Hours nor require internet access. Optionally, VMX Developers will be able to download a fully-configured virtual machine running VMX on Linux for use inside VirtualBox. More here: Update #5

____________

What if your computer was just a little bit smarter? What if your devices could understand what is going on in their surroundings merely by looking at the world through a camera? VMX gives your computer the ability to see.  VMX could be used to make games more engaging, enhance your interactions with your devices, and allow your computer to automate many of your daily chores and responsibilities.

The VMX project was designed to bring cutting-edge computer vision technology to a very broad audience: hobbyists, researchers, artists, students, roboticists, engineers, businesses, and entrepreneurs. The VMX project will give you all the tools you need to bring your own creative computer vision projects to life.

You + VMX = A computer vision jedi

VMX allows for a variety of different input formats.  Whether it’s a webcam, a YouTube video, or a map-flyover, if you can render it on your screen, VMX can use it.  This means that whether your ideal application involves processing previously recorded videos, learning from Google Image search, or having a camera watch your refrigerator, the possibilities are endless.

VMX allows for a variety of input formats

Here is a video showing real-time face parts and hand-gesture recognition. It's easy to add your own gestures and customize their performance directly in our GUI.  You'll be amazed at how much time you'll save by using our GUI.

Here is a video showing Southpark Characters being recognized inside the browser.  VMX works on videos, cartoons, and all sorts of different video streams.

Here is a video showing VMX detecting objects in a Google Earth map fly-over.  Want to count cars from overhead imagery?  Or perhaps count how many pools are in your neighborhood?

VMX in the browser  

In order to make the barrier-of-entry to computer vision as low as possible, we built VMX directly in the browser and made sure that it requires no extra hardware. All you need is a laptop with a webcam and an internet connection. Because browsers such as Chrome and Firefox can read video directly from a webcam, you most likely have all of the required software and hardware. The only thing missing is VMX.

What you'll need for an unforgettable VMX Experience

You won't need a PhD to use Visual AI: Why you’ll love VMX 

VMX gives you our very own open-source vision apps as well as everything you need to effortlessly build your very own computer vision apps. Our technology is built on top of 10+ years of computer vision research experience acquired from CMU, MIT, and Google (see About The Founders section below). By leaving the hard stuff to us, you will be able to focus on creative uses of computer vision without the headaches of mastering machine learning algorithms or managing expensive computations. You won’t need to be a C++ guru or know anything about statistical machine learning algorithms to start using laboratory-grade computer vision tools for your own creative uses.

Training at your fingertips

The “killer feature” of VMX is an effortless method for training your own object detectors, directly in the browser. We talked to many aspiring developers and quickly realized that many people’s crazy ideas involve the ability to recognize different (and sometimes quite personal) objects. By waving an object in front of your laptop’s screen, you will be able to train your own object detector in a matter of minutes. 

Creating a new object detector requires drawing a few selection boxes directly over the input video stream and then spending some time in “learning mode.” While you are in learning mode, the detector continues to run in real-time while learning about the object, making it ready for your application in a matter of minutes. You can then save a detector, or “object model,” for later use.
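The flow above — drawing selection boxes over the video stream to seed a new detector — can be sketched as a small payload builder. Everything here (the function name, the field names, the box format) is an illustrative assumption, not the actual VMX API: the point is that boxes drawn in pixel space get normalized so they are independent of the video resolution.

```javascript
// Build a hypothetical training payload from selection boxes drawn over a frame.
// Field names and box format are assumptions for illustration, not the real VMX API.
function buildTrainingPayload(modelName, frameWidth, frameHeight, pixelBoxes) {
  // Normalize pixel-space boxes to [0, 1] so the payload is resolution-independent.
  const boxes = pixelBoxes.map(({ x, y, w, h }) => ({
    x: x / frameWidth,
    y: y / frameHeight,
    w: w / frameWidth,
    h: h / frameHeight,
  }));
  return { model: modelName, positives: boxes };
}

// Example: two selection boxes drawn on a 640x480 video frame.
const payload = buildTrainingPayload('coffee-mug', 640, 480, [
  { x: 64, y: 48, w: 128, h: 96 },
  { x: 320, y: 240, w: 64, h: 48 },
]);
```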

Creating a new object detector is fun and simple!

Running Multiple Object Detectors

You will be able to train multiple detectors for the different objects you care about.  With VMX you can load, save, and manage all of your object models.  You can run multiple detectors in real-time, use the GUI to make them faster or more robust, and most importantly, you can always improve your object detector later by enabling “learning-mode.”  Here is an example of the model library which lets you select pre-trained object models.
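When several detectors run on the same frame, an app usually wants the best hit from each one. The sketch below assumes a simplified `{model, score, box}` result shape (an illustrative stand-in, not the real VMX output) and keeps the highest-scoring detection per model.

```javascript
// Hypothetical per-frame results from several detectors running at once.
// The {model, score, box} shape is an assumption for illustration.
function bestDetectionPerModel(detections) {
  const best = new Map();
  for (const d of detections) {
    const current = best.get(d.model);
    if (!current || d.score > current.score) best.set(d.model, d);
  }
  return best;
}

const frame = [
  { model: 'dog', score: 0.91, box: [10, 20, 50, 40] },
  { model: 'dog', score: 0.55, box: [200, 80, 45, 35] },
  { model: 'sofa', score: 0.78, box: [120, 150, 300, 100] },
];
const best = bestDetectionPerModel(frame);
```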

VMX Model Library lets you manage a collection of object detectors
Advanced Model Editor

To help you train an object detector in a very difficult scenario (such as one Southpark cartoon character versus another), we built an advanced model editor which lets you visually tweak the learned model.  The Model Editor GUI inside VMX lets you move examples from the positive side to the negative side, and vice-versa.  All you need to know about machine learning is that a “positive” example is what VMX thinks is the object and a “negative” example is what VMX thinks is not the object.
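The editing operation described above amounts to moving one training example between the two sides of a model. The model shape below is an assumed, simplified stand-in for illustration only.

```javascript
// Minimal sketch of the positive/negative editing idea: move one training
// example between the "positive" and "negative" sides of a model.
// The model shape is an assumed, simplified stand-in for illustration.
function moveExample(model, exampleId, from, to) {
  const i = model[from].findIndex((e) => e.id === exampleId);
  if (i === -1) return model; // nothing to move
  const [example] = model[from].splice(i, 1);
  model[to].push(example);
  return model;
}

const model = {
  positives: [{ id: 'a' }, { id: 'b' }],
  negatives: [{ id: 'c' }],
};
// A mislabeled example 'b' is moved from the positive to the negative side.
moveExample(model, 'b', 'positives', 'negatives');
```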

VMX Advanced Model Editor
VMX “Vision-Aware” Apps

The VMX project will come with a suite of open-source “vision-aware” apps which showcase the capabilities of our technology.  Vision-aware apps allow the computer to perform actions in real-time based on the presence and location of detected objects.  Most importantly, VMX apps run directly inside your browser!  We decided to make the Kickstarter VMX Apps open-source and wrote them in one of the most popular programming languages on the internet, Javascript.  You’ll be able to analyze, modify, and re-combine our VMX apps as a starting point for creating your very own vision-aware app.

If successfully funded, backers will get access to the following VMX Apps (as featured in our video):

VMX Tweet & Greet App: If X is detected, send a Tweet to Y. If you want to know when your dog has jumped on the sofa, then this app is for you.
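The “if X is detected, do Y” pattern in this app needs one practical detail: a cooldown, so the action fires once per sighting rather than on every video frame. The detection stream and callback below are illustrative assumptions, not the real VMX app code.

```javascript
// Sketch of the "if X is detected, do Y" pattern with a cooldown so the
// action fires once per sighting rather than on every video frame.
// The detection shape and 0.8 score threshold are illustrative assumptions.
function makeDetectionTrigger(onDetected, cooldownMs) {
  let lastFired = -Infinity;
  return function handleFrame(detections, nowMs) {
    const seen = detections.some((d) => d.score > 0.8);
    if (seen && nowMs - lastFired >= cooldownMs) {
      lastFired = nowMs;
      onDetected(nowMs); // e.g. send the Tweet here
    }
  };
}

const firedAt = [];
const trigger = makeDetectionTrigger((t) => firedAt.push(t), 1000);
// Simulated frames: the dog is on the sofa from t=0 through t=1500 ms.
for (const t of [0, 100, 200, 1100, 1500]) {
  trigger([{ model: 'dog-on-sofa', score: 0.95 }], t);
}
// firedAt is [0, 1100]: the action fired twice, not once per frame.
```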

VMX Pong App: Use real-world objects as controllers. Challenge your friends. If you’re serious about emerging technologies for gaming, then this app is for you.

Below is a video of an early VMX prototype used to play Pong.

VMX Counter App: Do you need to know how many bottles are on the counter?  Or how many cars are parked in your company’s parking lot? Then this app is for you.
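The counting idea reduces to tallying detections of one model above a confidence threshold. The detection shape here is an assumption for illustration.

```javascript
// Sketch of the counting idea: tally detections of one model whose
// confidence score clears a threshold. The shape is an assumption.
function countDetections(detections, model, threshold) {
  return detections.filter((d) => d.model === model && d.score >= threshold).length;
}

const shelf = [
  { model: 'bottle', score: 0.92 },
  { model: 'bottle', score: 0.87 },
  { model: 'bottle', score: 0.41 }, // low-confidence detection, ignored
  { model: 'mug', score: 0.9 },
];
const bottles = countDetections(shelf, 'bottle', 0.8);
```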

Below are a few other VMX apps we are working on.  We’ll let you know when they are ready!

VMX HandPlay App: Tired of looking for your TV remote? Do you want to use hand gestures to control media playback? This app will show you how to pimp out your living room with VMX technology.

VMX API and vision-as-a-service

By running the most expensive computations on our servers, you will be able to use VMX technology on a variety of different computers and hand-held devices, as long as each device has a camera and an internet connection.  NOTE: As of January 5th, 2014 we also added the ability for $100 backers to download a standalone executable which will run locally on their computers.

There are two levels of VMX APIs:

- a jQuery-inspired “high-level” javascript API that runs inside the browser and has convenient abstractions for the most common use cases (things entering, leaving, or moving around the video stream), as well as programmatic access to the VMX environment.

- a RESTful API (which the javascript API and VMX environment rely on) that allows you to use your object detectors outside the browser in any language that can make an HTTP request.  With the RESTful API you can create, improve, and use your object detectors with an extremely high degree of freedom, in the programming language of your choosing.
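To make the RESTful level concrete, here is a hedged sketch of what a call to a detection endpoint might look like. The URL path, field names, and request shape are all assumptions for illustration; the VMX Cookbook documents the actual API.

```javascript
// Hedged sketch of a call to a hypothetical RESTful detection endpoint.
// The URL path and field names are assumptions, not the documented VMX API.
function buildDetectRequest(apiBase, modelName, base64Image) {
  return {
    url: `${apiBase}/models/${encodeURIComponent(modelName)}/detect`,
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({ image: base64Image }),
  };
}

const req = buildDetectRequest('https://api.example.com/v1', 'coffee-mug', 'aGVsbG8=');
// Any language that can make an HTTP request can send this, e.g. in JS:
// fetch(req.url, { method: req.method, headers: req.headers, body: req.body });
```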

A VMX Cookbook will be provided to developers; it documents the APIs, discusses the motivation behind the API design, and gives you code example “recipes” so you can get the most done with the least amount of effort.

Why Kickstarter?

We believe in the power of decentralized systems and feel that Kickstarter is an ideal platform to deliver our technology to a broad base of early adopters and technology enthusiasts.  We are developers, have always been developers, and always will be developers.  We have witnessed amazing things built with computer vision technology and want to let the whole world experience computer vision without the typical pains of using laboratory-grade technology.  By supporting this Kickstarter campaign, you are receiving more than just an awesome product and service.  You are backing our dream of building a foundation for the masses to unleash their creativity.  In order to deliver a high-quality ad-free product, we need to fully focus on VMX development without taking on side projects to cover the cloud computing bills inherent in this project.

Our Kickstarter Rewards

Our main reward is Kickstarter-exclusive early access to the VMX webapp.  If we reach our Kickstarter goal of $100K, we plan on a public release in the Summer of 2014.  Each “VMX Developer” will receive early access to VMX technology and each backer will receive a heavy discount on the VMX service, as measured in terms of VMX Compute Hours (see note below).

If funding is successful, VMX Compute Hours will be $1/hr when we launch.  By contributing now, you not only receive significant discounts on your VMX Compute Hours, but there is no cost associated with VMX Developer access (at the $100+ tiers), which allows you to list your apps in our directory, among other benefits.

(Update: January 14th 2014) We will give 10% of Kickstarter generated funds to high school students and clubs in the form of software licenses. More here: Update #8
