How to make ROS and Opencv work together

http://dasl.mem.drexel.edu/wiki/index.php/How_to_make_ROS_and_Opencv_work_together

Introduction

In the last two tutorials we learned how to control the ARDRONE quadrotor using ROS and how to access the robot's front and bottom cameras. Since we have access to the quadrotor's vision, we shall now learn how to use OpenCV, opening the door to computer vision. OpenCV is a library of programming functions for real-time computer vision. It makes image manipulation easier and helps in developing complex vision-based algorithms.
I recommend going through the basics of OpenCV by following this link: http://www.cs.iit.edu/~agam/cs512/lect-notes/opencv-intro/opencv-intro.html. Also, if you are really interested in taking your computer vision knowledge to the next level, you should start reading this book: http://www.cse.iitk.ac.in/users/vision/dipakmj/papers/OReilly%20Learning%20OpenCV.pdf.
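To get a feel for what OpenCV can do on its own, before bringing ROS into the picture, here is a minimal standalone sketch that loads an image from disk and runs the same Canny edge detection we will later apply to the ARDRONE video feed. The file name "test.jpg" is just a placeholder; point it at any image you have.

#include <opencv2/imgproc/imgproc.hpp>
#include <opencv2/highgui/highgui.hpp>
#include <iostream>

int main()
{
  // Load an image from disk ("test.jpg" is a placeholder file name)
  cv::Mat img = cv::imread("test.jpg");
  if (img.empty())
  {
    std::cerr << "Could not open test.jpg" << std::endl;
    return 1;
  }

  // Convert to grayscale, smooth, and run Canny edge detection
  cv::Mat gray, edges;
  cv::cvtColor(img, gray, CV_BGR2GRAY);
  cv::GaussianBlur(gray, gray, cv::Size(9, 9), 0);
  cv::Canny(gray, edges, 50, 125);

  // Display the input image and the resulting edge map
  cv::imshow("input", img);
  cv::imshow("edges", edges);
  cv::waitKey(0);
  return 0;
}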

ROS and OpenCV

When you download ROS, it comes with the OpenCV library. To check this, open up a terminal and enter:

  roscd vision_opencv

This should take you to the folder of the vision_opencv package; if it does not, you will have to download the vision_opencv stack from http://www.ros.org/wiki/vision_opencv .

Now let's write a simple program to do Canny edge detection on a video feed. In this case, let's take the video feed from the ARDRONE.

Create a package

In this part you will create a ROS package with the required dependencies. Dependencies are the other packages that your package depends on. For more information on package creation, please have a look at http://www.ros.org/wiki/roscreate. Note: Make sure you create all your packages in the ROS working directory that you created in your home folder while installing ROS.


Step 1: Open up a new terminal and enter:

 roscreate-pkg roscv sensor_msgs opencv2 cv_bridge roscpp std_msgs image_transport turtlesim ardrone_brown std_srvs


Step 2: Go to the "src" folder of the newly created package, create a file named "simplecanny.cpp", and paste the following code into it.


#include <ros/ros.h>
#include <stdio.h>
#include <iostream>
#include "std_msgs/String.h"
#include <image_transport/image_transport.h>
#include <cv_bridge/cv_bridge.h>
#include <cv_bridge/CvBridge.h>
#include <sensor_msgs/image_encodings.h>
#include <opencv2/imgproc/imgproc.hpp>   // make sure to include the relevant header files
#include <opencv2/highgui/highgui.hpp>
#include <opencv/cv.h>
#include <opencv/highgui.h>
 
/* A simple program that demonstrates using ROS and OpenCV together to do image
manipulation on the video stream published as an image topic by a robot's
monocular camera; here the device is an ARDRONE (quad-rotor). */
 
using namespace std;
using namespace cv;
namespace enc = sensor_msgs::image_encodings;
 
static const char WINDOW_FEED[]  = "ARDRONE FEED";
static const char WINDOW_CANNY[] = "CANNY EDGE DETECTION";
 
class simplecanny
{
  ros::NodeHandle nh_;
  image_transport::ImageTransport it_;
  image_transport::Subscriber image_sub_;   // image subscriber (we subscribe to the ardrone image_raw topic)
  image_transport::Publisher image_pub_;    // image publisher (advertised but not used in this example)
 
public:
  simplecanny()
    : it_(nh_)
  {
    image_sub_ = it_.subscribe("/ardrone/image_raw", 1, &simplecanny::imageCb, this);
    image_pub_ = it_.advertise("/arcv/Image", 1);
  }
 
  ~simplecanny()
  {
    cvDestroyWindow(WINDOW_FEED);
    cvDestroyWindow(WINDOW_CANNY);
  }
 
  void imageCb(const sensor_msgs::ImageConstPtr& msg)
  {
    sensor_msgs::CvBridge bridge;                    // the bridge object converts the ROS image message to an OpenCV image
    IplImage* img = bridge.imgMsgToCv(msg, "bgr8");  // ROS image converted to an OpenCV IplImage
 
    // Allocate the intermediate images needed for the Canny edge output
    IplImage* gray_out  = cvCreateImage(cvGetSize(img), IPL_DEPTH_8U, 1);
    IplImage* canny_out = cvCreateImage(cvGetSize(img), IPL_DEPTH_8U, 1);
    IplImage* gray_out1 = cvCreateImage(cvGetSize(img), IPL_DEPTH_8U, 3);
 
    cvCvtColor(img, gray_out, CV_BGR2GRAY);           // convert to grayscale
    cvSmooth(gray_out, gray_out, CV_GAUSSIAN, 9, 9);  // smooth to suppress noise
    cvCanny(gray_out, canny_out, 50, 125, 3);         // Canny edge detection (thresholds 50 and 125)
    cvCvtColor(canny_out, gray_out1, CV_GRAY2BGR);    // back to 3 channels for display
 
    cvShowImage(WINDOW_FEED, img);
    cvShowImage(WINDOW_CANNY, gray_out1);
    cvWaitKey(2);
 
    // Release the images we allocated so the callback does not leak memory
    cvReleaseImage(&gray_out);
    cvReleaseImage(&canny_out);
    cvReleaseImage(&gray_out1);
  }
};
 
 
 
 
int main(int argc, char** argv)
{
  ros::init(argc, argv, "simple_canny");
  simplecanny ic;
  ros::spin();
 
  return 0;
}

Note: Save and close the file.
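A side note: the sensor_msgs::CvBridge class used above is the older C-style bridge. The same conversion can also be done with the newer cv_bridge API (the cv_bridge/cv_bridge.h header is already included), which gives you a cv::Mat instead of an IplImage. Below is a minimal sketch of what the node could look like with that API; it subscribes to the same /ardrone/image_raw topic, and the node name "simple_canny_cvbridge" is only for illustration.

#include <ros/ros.h>
#include <image_transport/image_transport.h>
#include <cv_bridge/cv_bridge.h>
#include <sensor_msgs/image_encodings.h>
#include <opencv2/imgproc/imgproc.hpp>
#include <opencv2/highgui/highgui.hpp>

// Callback using the newer cv_bridge API: convert the ROS image message to a
// cv::Mat, run the same Canny pipeline, and display the result.
void imageCb(const sensor_msgs::ImageConstPtr& msg)
{
  cv_bridge::CvImagePtr cv_ptr;
  try
  {
    cv_ptr = cv_bridge::toCvCopy(msg, sensor_msgs::image_encodings::BGR8);
  }
  catch (cv_bridge::Exception& e)
  {
    ROS_ERROR("cv_bridge exception: %s", e.what());
    return;
  }

  cv::Mat gray, edges;
  cv::cvtColor(cv_ptr->image, gray, CV_BGR2GRAY);
  cv::GaussianBlur(gray, gray, cv::Size(9, 9), 0);
  cv::Canny(gray, edges, 50, 125);

  cv::imshow("ARDRONE FEED", cv_ptr->image);
  cv::imshow("CANNY EDGE DETECTION", edges);
  cv::waitKey(2);
}

int main(int argc, char** argv)
{
  ros::init(argc, argv, "simple_canny_cvbridge");
  ros::NodeHandle nh;
  image_transport::ImageTransport it(nh);
  image_transport::Subscriber sub = it.subscribe("/ardrone/image_raw", 1, imageCb);
  ros::spin();
  return 0;
}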


Step 3: Open up the file named "CMakeLists.txt" in your package and add the following line:


  rosbuild_add_executable(roscv src/simplecanny.cpp)

NOTE: Save and close the file
Step 4: Open up a terminal, go to the package directory, and enter (one after the other):

 cmake .
 rosmake



(Make sure you get 0 failures for rosmake)


Step 5: Now we are going to test our code with the ARDRONE. Turn on the ARDRONE and connect to it over WiFi. Then enter the following commands one after the other, each in its own terminal:

roscore  

rosrun ardrone_brown ardrone_driver 

rosrun roscv roscv 



If everything works well you will see two different windows, one showing the result of the Canny edge detection. The thresholds for this operation were 50 and 125; you can play with them further according to the needs of your environment. I am referring to the threshold parameters of cvCanny (which you will find in simplecanny.cpp).
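If you want to experiment with the thresholds interactively instead of recompiling after every change, a minimal standalone sketch like the one below lets you tune them with HighGUI trackbars on a test image first (the file name "test.jpg" is a placeholder) and then copy the values you like into the cvCanny call in simplecanny.cpp.

#include <opencv2/imgproc/imgproc.hpp>
#include <opencv2/highgui/highgui.hpp>
#include <iostream>

int low_thresh  = 50;    // lower Canny threshold (same default as simplecanny.cpp)
int high_thresh = 125;   // upper Canny threshold
cv::Mat gray;            // smoothed grayscale input, shared with the callback

// Recompute and display the edge map whenever a trackbar moves
void onTrackbar(int, void*)
{
  cv::Mat edges;
  cv::Canny(gray, edges, low_thresh, high_thresh);
  cv::imshow("CANNY TUNING", edges);
}

int main()
{
  cv::Mat img = cv::imread("test.jpg");   // placeholder file name
  if (img.empty())
  {
    std::cerr << "Could not open test.jpg" << std::endl;
    return 1;
  }

  cv::cvtColor(img, gray, CV_BGR2GRAY);
  cv::GaussianBlur(gray, gray, cv::Size(9, 9), 0);

  cv::namedWindow("CANNY TUNING");
  cv::createTrackbar("low",  "CANNY TUNING", &low_thresh,  255, onTrackbar);
  cv::createTrackbar("high", "CANNY TUNING", &high_thresh, 255, onTrackbar);

  onTrackbar(0, 0);   // draw once with the initial thresholds
  cv::waitKey(0);
  return 0;
}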
