Robot Arm Grasping Experiment Notes: Summary

Required environment:

  • PC/laptop: 16 GB RAM or more, 200 GB disk or more, NVIDIA RTX 3060 or better GPU with 6-8 GB VRAM
  • Ubuntu 20.04 (Linux)
  • ROS Noetic distribution
  • Python environment set up inside an Anaconda virtual environment
  • IDE: VS Code configured for C++/Python is recommended
  • PyTorch and related Python packages: deep-learning object detection and localization
  • RealSense D435i Python API: obtain 3D information from the camera
  • CUDA 11.4.120; cuDNN 8.2.2: GPU acceleration libraries for deep-learning inference

Reference: checking CUDA and cuDNN versions on Linux and Windows (Baidu Wenku)
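
A quick way to confirm those versions from inside the Anaconda environment is to query PyTorch directly; a minimal sketch, assuming PyTorch is already installed there:

import torch

print(torch.__version__)                # PyTorch version
print(torch.version.cuda)               # CUDA version PyTorch was built against, e.g. "11.4"
print(torch.backends.cudnn.version())   # cuDNN version as an integer, e.g. 8202 for 8.2.2
print(torch.cuda.is_available())        # True if the NVIDIA GPU is visible
if torch.cuda.is_available():
    print(torch.cuda.get_device_name(0))  # e.g. "NVIDIA GeForce RTX 3060"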

Commonly used ROS commands:

  • MoveIt Setup Assistant
roslaunch moveit_setup_assistant setup_assistant.launch

Command to check the frame_id of a topic:

# /livox/imu is the topic name
rostopic echo /livox/imu | grep frame_id
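
The same check can be done from a small rospy script; a minimal sketch, assuming the topic carries sensor_msgs/Imu (swap in the matching message type for other topics):

import rospy
from sensor_msgs.msg import Imu

rospy.init_node("frame_id_probe", anonymous=True)
msg = rospy.wait_for_message("/livox/imu", Imu, timeout=5.0)   # grab one message
print(msg.header.frame_id)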

References:

  • xArm official site: Download - xArm - We make affordable robots for all.
  • ROS Wiki: xarm - ROS Wiki
  • GitHub: https://github.com/xArm-Developer/xarm_ros
  • Chinese xArm materials: https://pan.baidu.com/s/1VAWCOr9rEeL_DXz-JBQBzA (access code: osgj)
  • RealSense D435i: Stereo Depth Camera D400

Collision-detection test commands:

  • Start the camera node to acquire point cloud data
roslaunch realsense2_camera demo_pointcloud.launch
  • Start MoveIt!
roslaunch xarm7_moveit_config demo.launch
  • Inspect the coordinate transforms
rosrun rqt_tf_tree rqt_tf_tree

Fixed-camera (eye-to-hand) setup

[Figure 1]


Connecting ROS to the real xArm robot:

Configure a static host IP (Linux)


To connect to the real robot, first configure a static IP on the PC; a good Ethernet cable is also essential.
First, edit the network connection.

[Figure 2]

The controller of the robot under test has the IP address 192.168.1.208, so set the PC's Ethernet address within the 192.168.1.* range (it must not clash with the controller's address);

Controller wiring requirements:

Because the xArm uses a DC controller, an external 24 V DC power supply is required (provided through a power converter/switch);

Robot and controller wiring diagram:

[Figure 3]

Digital I/O wiring requirements:

The two pairs EI1,END and EI2,END are emergency-stop terminals; the two pairs SI0,ENF and SI1,END are protective-stop terminals;

Wire the emergency-stop button to EI2,END; bridge the remaining terminal pairs directly with jumper wires.

Communication:

Physical link: Ethernet

Protocol: TCP/IP

Setup: simply connect the PC and the controller with an Ethernet cable through their respective network ports.

Standard operating procedure:

  • Power-on:

1. Before powering the controller, check that the arm is wired correctly and the Ethernet cable is connected to the PC;

2. Power on: the controller beeps once and the red light turns on; three beeps indicate the Ethernet link is up;

3. Release the emergency-stop switch, start the program, and begin working.

  • Power-off

1. Close all running programs and press the emergency-stop switch (the arm's actuators are de-energized);

2. Press and hold the black button on the controller to shut it down, then disconnect the power.


Testing the real arm with MoveIt:

Workflow:

Launch the hardware driver and the ros_control launch file:

roslaunch xarm7_gripper_moveit_config realMove_exec.launch robot_ip:=192.168.1.208

Run the example script (before running on hardware, first verify in demo.launch that the motion does not collide with any obstacle in the workspace):

rosrun xarm7_moveit_config moveit_ik_xarm_1.py
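
The contents of moveit_ik_xarm_1.py are not reproduced here; the sketch below is a hypothetical minimal pose-goal script along the same lines using moveit_commander (the planning-group name "xarm7" and the target pose are assumptions, adjust them to your setup):

#!/usr/bin/env python
import sys
import rospy
import moveit_commander
from geometry_msgs.msg import Pose

moveit_commander.roscpp_initialize(sys.argv)
rospy.init_node("xarm_pose_goal_demo")
group = moveit_commander.MoveGroupCommander("xarm7")   # planning group name assumed

target = Pose()
target.position.x = 0.3        # target position in metres, assumed reachable
target.position.y = 0.0
target.position.z = 0.3
target.orientation.w = 1.0     # identity orientation; adjust for your tool

group.set_pose_target(target)
ok = group.go(wait=True)       # plan and execute on the connected arm
group.stop()
group.clear_pose_targets()
rospy.loginfo("motion finished: %s", ok)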


Hand-eye calibration procedure:

Note: this procedure determines the fixed transform between the camera pose and the end effector, i.e. it yields one set of calibration parameters.

Because a camera and a gripper are mounted on the end effector, first set the TCP payload in xArm Studio to the combined camera and gripper. Then:

  • Run the launch file:

roslaunch d435i_xarm_setup d435i_xarm_auto_calib.launch robot_dof:=your_xArm_DOF robot_ip:=your_xArm_IP
  • Download an `aruco Marker` tag:

Link: https://chev.me/arucogen/. Record the `marker ID` and `marker size`, and update the corresponding parameters in the launch file above. See the official guide: https://github.com/IFL-CAMP/easy_handeye#calibration.

(The calibration result is saved under `~/.ros/easy_handeye`; for the fixed-camera (eye-to-hand) case it is saved under "xarm_vision/d435i_xarm_setup/config/".)

  • Vision Guided Grasping Demo:

Note that the demo uses the example hand-eye calibration result mentioned above; you can change it in publish_handeye_tf.launch. For the node source code, see d435i_xarm_setup/src/findobj_grasp_moveit_planner.cpp.

(See http://introlab.github.io/find-object/)

roslaunch d435i_xarm_setup d435i_findobj2d_xarm_moveit_planner.launch robot_dof:=your_xArm_DOF robot_ip:=your_xArm_IP

If the target object is detected correctly, run the grasping node:

rosrun d435i_xarm_setup findobj2d_grasp_moveit

In the code, the recognition target is named "object_1", which corresponds to 1.png in the /objects directory. You can add your own targets in the "find_object_2d" GUI and then modify the source_frame in the code to build a custom application; a minimal TF lookup sketch for the detected object frame is given below.
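
To check that the detected object is being published correctly before grasping, the transform to the object frame can be queried with tf2; a minimal sketch, assuming "link_base" is the robot base frame and "object_1" is the frame broadcast by find_object_2d (both names are assumptions to adapt):

#!/usr/bin/env python
import rospy
import tf2_ros

rospy.init_node("object_frame_probe")
buf = tf2_ros.Buffer()
listener = tf2_ros.TransformListener(buf)

rate = rospy.Rate(1.0)
while not rospy.is_shutdown():
    try:
        t = buf.lookup_transform("link_base", "object_1", rospy.Time(0), rospy.Duration(1.0))
        p = t.transform.translation
        rospy.loginfo("object_1 in base frame: x=%.3f y=%.3f z=%.3f", p.x, p.y, p.z)
    except (tf2_ros.LookupException, tf2_ros.ConnectivityException, tf2_ros.ExtrapolationException) as e:
        rospy.logwarn("waiting for transform: %s", e)
    rate.sleep()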

Result: [Figure 4]


Pitaya (dragon fruit) grasping procedure:

Note: check that the workspace is free of obstacles and that all cables are connected properly; the launch commands must be run in the order given, otherwise the nodes cannot communicate.

  • Start the launch file (includes the real-robot hardware driver and the publisher of the hand-eye calibration parameters):

This powers up the arm interface and establishes communication with the PC.

roslaunch d435i_xarm_setup fruit_grasp.launch add_gripper:=true
  • Start the depth camera and the YOLOv4 detection/localization program:

(includes the ROS node, topics, TF broadcast, and the deep-learning algorithm)

The camera node then communicates with the PC.

python Pitaya_point_sparam.py

Note: this must be started from the VS Code terminal because the script relies on files in the same directory; it has not been tested from a plain Ubuntu terminal.

  • Run the Python/C++ demos (only one command at a time):
# Joint-space free-curve trajectory:
rosrun d435i_xarm_setup my_test
# Cartesian-space straight-line trajectory planning:
rosrun d435i_xarm_setup tf_test1.py
rosrun d435i_xarm_setup tf_test2.py

The demo files are in the xarm7_moveit_config/scripts or src folders; a minimal moveit_commander sketch of a Cartesian straight-line move follows below.
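
As a reference for what those demos do, here is a minimal moveit_commander sketch of a Cartesian straight-line move (not the actual tf_test scripts; the group name "xarm7" and the 10 cm downward motion are assumptions):

#!/usr/bin/env python
import sys, copy
import rospy
import moveit_commander

moveit_commander.roscpp_initialize(sys.argv)
rospy.init_node("cartesian_line_demo")
group = moveit_commander.MoveGroupCommander("xarm7")   # planning group name assumed

start = group.get_current_pose().pose
goal = copy.deepcopy(start)
goal.position.z -= 0.10            # move 10 cm straight down

# interpolate the straight line with 1 cm end-effector steps, jump check disabled
(plan, fraction) = group.compute_cartesian_path([goal], 0.01, 0.0)
if fraction > 0.95:
    group.execute(plan, wait=True)
else:
    rospy.logwarn("only %.0f%% of the path is feasible, not executing", fraction * 100)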


Collision detection and grasping based on Octomap mapping

octomap_server - ROS Wiki

Controlling a real robot arm with ROS (18): building an octomap for collision detection and obstacle avoidance (Guyuehome)

Octomap map building:

Run the following commands:

# Publish the point cloud data
roslaunch realsense2_camera demo_pointcloud.launch
# Start the real robot arm
roslaunch xarm7_gripper_moveit_config realMove_exec.launch
# Start the remapping node that feeds the point cloud to octomap_server to build a stitched global point-cloud map;
# the resulting topic is /octomap_point_cloud_centers
roslaunch xarm7_gripper_moveit_config make_pointcloud.launch


# Move the robot around to build the map

# Finally, save the global point-cloud message to an XXX.pcd file
rosrun xarm7_gripper_moveit_config pcl2pcd
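
pcl2pcd is a C++ node in the repository; a rough Python equivalent that grabs one message from the stitched cloud and writes an ASCII PCD file would look like the sketch below (topic name from the notes above; the output path is an assumption):

#!/usr/bin/env python
import rospy
import sensor_msgs.point_cloud2 as pc2
from sensor_msgs.msg import PointCloud2

def save_pcd(msg, path="/tmp/global_map.pcd"):
    # pull x/y/z out of the PointCloud2 message and dump a minimal ASCII PCD header
    pts = list(pc2.read_points(msg, field_names=("x", "y", "z"), skip_nans=True))
    with open(path, "w") as f:
        f.write("VERSION 0.7\nFIELDS x y z\nSIZE 4 4 4\nTYPE F F F\nCOUNT 1 1 1\n")
        f.write("WIDTH %d\nHEIGHT 1\nVIEWPOINT 0 0 0 1 0 0 0\nPOINTS %d\nDATA ascii\n" % (len(pts), len(pts)))
        for x, y, z in pts:
            f.write("%f %f %f\n" % (x, y, z))
    rospy.loginfo("saved %d points to %s", len(pts), path)
    rospy.signal_shutdown("done")

rospy.init_node("pcd_saver")
rospy.Subscriber("/octomap_point_cloud_centers", PointCloud2, save_pcd, queue_size=1)
rospy.spin()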

Viewing the point-cloud map in the simulated environment in RViz:

# Start the simulated environment
roslaunch xarm7_gripper_moveit_config demo.launch

# Publish the Octomap
rosrun xarm7_gripper_moveit_config pub_octomap

Result:

[Figure 5]

Point cloud processing (filtering, segmentation):

Because of sensor precision, operator experience, and environmental factors, stray outlier points are common in the captured cloud; they show up as isolated collision cells in the generated Octomap, so the point cloud file needs to be filtered first (a short Open3D sketch follows the references). References:

Classic PCL code review, part 4: point cloud filtering (Neverland_LY, CSDN blog)

ROS octree mapping: adding a radius filter to octomap_server! (DengLong, CSDN blog)

How to use PCL in ROS - data formats (1) (Being_young, cnblogs)
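
The references above use PCL in C++; the same statistical outlier removal can be prototyped offline on the saved PCD with Open3D (a sketch, assuming Open3D is installed in the conda environment and using the file path from the saving step above):

import open3d as o3d

pcd = o3d.io.read_point_cloud("/tmp/global_map.pcd")   # path assumed

# statistical outlier removal: drop points whose mean distance to their 20 nearest
# neighbours deviates from the global mean by more than 2 standard deviations
filtered, kept_idx = pcd.remove_statistical_outlier(nb_neighbors=20, std_ratio=2.0)

# optional voxel down-sampling to thin out the input fed to octomap_server
filtered = filtered.voxel_down_sample(voxel_size=0.005)

o3d.io.write_point_cloud("/tmp/global_map_filtered.pcd", filtered)
print("kept %d of %d points" % (len(kept_idx), len(pcd.points)))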

Loading the data in the real environment

In the xarm7_gripper_moveit_config sensor configuration file src/xarm_ros/xarm7_gripper_moveit_config/config/sensors_3d.yaml, point the subscribed topic at the global point-cloud topic:

# point cloud
sensors:
  - sensor_plugin: occupancy_map_monitor/PointCloudOctomapUpdater
    # topic name matches src/xarm_ros/xarm7_gripper_moveit_config/src/pub_octomap.cpp
    point_cloud_topic: /pointcloud/output_1
    max_range: 2
    frame_subsample: 4
    point_subsample: 4
    padding_offset: 0.05
    padding_scale: 5.0
    max_update_rate: 10
    filtered_cloud_topic: filtered_cloud

Then run the following, in order:

1. roscore

2. Run src/xarm_ros/xarm_vision/d435i_xarm_setup/scripts/multiyolov5-seg_v1/detect_2.py

3. roslaunch d435i_xarm_setup fruit_grasp.launch

4. rosrun xarm7_gripper_moveit_config pub_octomap

5. rosrun d435i_xarm_setup tf_test2.py

Experimental results:


Visual servoing grasping task

Run the following:

roscore

# 2D computer vision
/home/robot/anaconda3/envs/pytorch/bin/python /home/robot/catkin_ws/src/xarm_ros/xarm_vision/d435i_xarm_setup/scripts/multiyolov5-seg_v1/detect_2.py

# Robot and visual servoing
roslaunch mfvs demo_pbvs.launch

# Publish the point-cloud octomap; enable the PointCloud2 display in RViz
rosrun xarm7_gripper_moveit_config pub_octomap

# Start the example
#Python
rosrun mfvs pbvs_sub.py

Problem 1

xArm SDK path issue:

source ~/.bashrc

Make the scripts executable before running them: chmod +x *.py

Add the following before importing the module:

sys.path.append(os.path.join(os.path.dirname(__file__), '../../'))

This line fixes the xArm SDK path issue; each package directory along the import path also needs an __init__.py.

Problem 2

usr/bin/ld: CMakeFiles/mfvs_pbvs_sub.dir/src/pbvs_sub.cpp.o: in function `RobotPlanning(mfvs::CartVelCmd_ >)':
pbvs_sub.cpp:(.text+0x1ff): undefined reference to `XArmAPI::set_gripper_position(float, bool, float, bool)'
/usr/bin/ld: pbvs_sub.cpp:(.text+0x228): undefined reference to `XArmAPI::vc_set_cartesian_velocity(float*, bool, float)'
/usr/bin/ld: pbvs_sub.cpp:(.text+0x269): undefined reference to `XArmAPI::set_gripper_position(float, bool, float, bool)'
/usr/bin/ld: pbvs_sub.cpp:(.text+0x29d): undefined reference to `XArmAPI::vc_set_cartesian_velocity(float*, bool, float)'
/usr/bin/ld: pbvs_sub.cpp:(.text+0x2be): undefined reference to `XArmAPI::set_mode(int)'
/usr/bin/ld: pbvs_sub.cpp:(.text+0x2d5): undefined reference to `XArmAPI::set_state(int)'
/usr/bin/ld: pbvs_sub.cpp:(.text+0x31f): undefined reference to `XArmAPI::set_servo_angle(float*, float, float, float, bool, float, float)'
/usr/bin/ld: pbvs_sub.cpp:(.text+0x36f): undefined reference to `XArmAPI::set_servo_angle(float*, float, float, float, bool, float, float)'
...
/---------------------------------------------

Solution:

First build and install the C++ SDK from the xarm_ros package in the workspace:

i.e. run `make xarm` and then `sudo make install`;

[Figure 6]

Then, in the CMakeLists.txt of mfvs, link the xarm library to the target (.cpp) and add its paths, because the xarm library does not ship its own CMake path configuration.

That is, add the following:

set(xarm_INCLUDE_DIRS "/usr/include/xarm")
set(xarm_LIBRARIES "/usr/lib/libxarm.so")

#-----------------------------------------------
add_executable(${PROJECT_NAME}_pbvs_sub
  src/pbvs_sub.cpp
)

add_dependencies(${PROJECT_NAME}_pbvs_sub
  ${catkin_EXPORTED_TARGETS}
  ${${PROJECT_NAME}_EXPORTED_TARGETS}
  arm_vs_generate_messages_cpp
)


target_link_libraries(${PROJECT_NAME}_pbvs_sub
  ${catkin_LIBRARIES}
  ${roscpp_LIBRARIES}
  ${xarm_LIBRARIES}        # link the xarm library here!
)


 

Referenced code:

/home/lizhihao/catkin_ws/src/mfvs/launch/demo_pbvs.launch

Command: roslaunch mfvs demo_pbvs.launch

/home/lizhihao/catkin_ws/src/mfvs/src/pbvs.cpp

Command: launched as part of demo_pbvs.launch above (no separate command).

/**
 * This node takes in desired camera pose and current camera pose and uses VISP code to
 * calculate the end effector velocity needed to minized the error between two poses mentioned.
 */
// NOTE: the original header names were stripped when this page was exported;
// the includes below are reconstructed from what the code uses.
#include <cmath>
#include <algorithm>
#include <string>
#include <sstream>

#include <ros/ros.h>
#include <tf2_ros/buffer.h>
#include <tf2_ros/transform_listener.h>
#include <tf2_eigen/tf2_eigen.h>
#include <Eigen/Dense>
// ViSP
#include <visp3/core/vpColVector.h>
#include <visp3/core/vpHomogeneousMatrix.h>
#include <visp3/core/vpTranslationVector.h>
#include <visp3/core/vpQuaternionVector.h>
#include <visp3/visual_features/vpFeatureTranslation.h>
#include <visp3/visual_features/vpFeatureThetaU.h>
#include <visp3/vs/vpServo.h>
#include <visp3/vs/vpAdaptiveGain.h>

// msgs
#include "std_msgs/String.h"
// #include "sensor_msgs/image_encodings.h"
#include "geometry_msgs/Pose.h"
#include "geometry_msgs/Twist.h"
#include "geometry_msgs/TransformStamped.h"
#include "mfvs/CartVelCmd.h"
#include "mfvs/BBox.h"

#define PI 3.14159265

class VisualServoing{
    private:
        ros::NodeHandle nh;

        tf2_ros::Buffer tf_buffer;
        tf2_ros::TransformListener* tf_listener;
        geometry_msgs::TransformStamped tf_transform;
        ros::Publisher cart_vel_pub;
        // ros::Subscriber box_sub;

        //parameters
        double pbvs_control_loop_hz;
        double pbvs_control_law_gain_lambda;
        double pbvs_control_deadband_error;

        double xyz_vel_limit;
        double rpy_vel_limit;

        std::string desired_camera_frame;
        std::string current_camera_frame;
        std::string robot_base_frame;
        std::string control_input_topic;

        mfvs::CartVelCmd control_input;
        bool detection;

    public:
        VisualServoing(ros::NodeHandle node_handle){
            nh = node_handle;
            VisualServoing::getParametersValues();

            tf_listener = new tf2_ros::TransformListener(tf_buffer);
            cart_vel_pub = nh.advertise<mfvs::CartVelCmd>(control_input_topic, 1);
            // box_sub = nh.subscribe("/tracked_obj_bbox", 1, &VisualServoing::callback_box, this);

            VisualServoing::pbvs();
        }
        
        // void callback_box(const arm_vs::BBox::ConstPtr& bbox)
        // {
        //     detection = bbox->detection;
        // }

        void getParametersValues(){
            nh.param("pbvs_control_loop_hz", pbvs_control_loop_hz, 60.0);
            nh.param("pbvs_control_law_gain_lambda", pbvs_control_law_gain_lambda, 1.0);
            nh.param("pbvs_control_deadband_error", pbvs_control_deadband_error, 0.0001);

            nh.param("xyz_vel_limit", xyz_vel_limit, 0.18);
            nh.param("rpy_vel_limit", rpy_vel_limit, 0.18);

            nh.param<std::string>("desired_camera_frame", desired_camera_frame, "/desired_cam_frame");
            nh.param<std::string>("current_camera_frame", current_camera_frame, "/camera_rgb_optical_frame");
            nh.param<std::string>("robot_base_frame", robot_base_frame, "robot_base_frame");
            nh.param<std::string>("control_input_topic", control_input_topic, "/control_input");
        }

        void getTwistVectorBodyFrame(Eigen::VectorXd& Vb, Eigen::VectorXd Vc, Eigen::Matrix4d bMc){
            // calculate adjoint using input transformation
            // [  R  0]
            // [[tR] R]
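            // With the twist ordered as [omega; v], this adjoint maps a twist expressed in
            // the camera frame into the base frame: omega_b = R*omega_c, v_b = [t]x*R*omega_c + R*v_c.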
            Eigen::Matrix3d bRc = bMc.block<3,3>(0,0); // rotation
            Eigen::Vector3d btc = bMc.block<3,1>(0,3); // translation

            // skew symmetric [t]
            Eigen::Matrix3d btc_s;
            btc_s << 0, -btc(2), btc(1),
                    btc(2), 0, -btc(0),
                    -btc(1), btc(0), 0;

            // Adjoint
            Eigen::MatrixXd bAdc(6, 6);
            bAdc << bRc, Eigen::Matrix3d::Zero(), 
                    btc_s*bRc, bRc;

            // calculate twist in body frame using adjoint and twist in camera frame
            Vb = bAdc * Vc;

            // std::stringstream ss;
            // ss.str(std::string());
            // ss << "Vb:\n" << Vb << "\n";
            // ROS_INFO("\n%s", ss.str().c_str());
        }

        void fixQuat(double &qx, double &qy, double &qz, double &qw){
            double norm = sqrt(qx*qx + qy*qy + qz*qz + qw*qw);
            qx = qx/norm;
            qy = qy/norm;
            qz = qz/norm;
            qw = qw/norm;

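            // q and -q describe the same rotation; if the angle 2*acos(qw) exceeds pi,
            // switch to -q so the ThetaU error corresponds to the shorter rotation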
            if(2*acos(qw) > PI)
            {
                qx = -qx;
                qy = -qy;
                qz = -qz;
                qw = -qw;
            }
        }

        void limitLinVel(double &v){
            v = std::min(v, xyz_vel_limit);
            v = std::max(v, -xyz_vel_limit);
        }

        void limitRotVel(double &w){
            w = std::min(w, rpy_vel_limit);
            w = std::max(w, -rpy_vel_limit);
        }

        void pbvs(){
            vpHomogeneousMatrix cdMc; // cdMc is the result of a pose estimation; cd: desired camera frame, c:current camera frame

            // Creation of the current visual feature s = (c*_t_c, ThetaU)
            vpFeatureTranslation s_t(vpFeatureTranslation::cdMc);
            vpFeatureThetaU s_tu(vpFeatureThetaU::cdRc);

            // Set the initial values of the current visual feature s = (c*_t_c, ThetaU)
            s_t.buildFrom(cdMc);
            s_tu.buildFrom(cdMc);

            // Build the desired visual feature s* = (0,0)
            vpFeatureTranslation s_star_t(vpFeatureTranslation::cdMc); // Default initialization to zero
            vpFeatureThetaU s_star_tu(vpFeatureThetaU::cdRc); // Default initialization to zero
            vpColVector v; // Camera velocity
            double error = 1.0;  // Task error

            // Creation of the visual servo task.
            vpServo task;
            // Visual servo task initialization
            // - Camera is monted on the robot end-effector and velocities are
            //   computed in the camera frame
            task.setServo(vpServo::EYEINHAND_CAMERA);
            // - Interaction matrix is computed with the current visual features s
            task.setInteractionMatrixType(vpServo::CURRENT);

            // vpAdaptiveGain lambda;
            // lambda.initStandard(4, 0.4, 30);

            // - Set the contant gain to 1
            task.setLambda(pbvs_control_law_gain_lambda);
            // - Add current and desired translation feature
            task.addFeature(s_t, s_star_t);
            // - Add current and desired ThetaU feature for the rotation
            task.addFeature(s_tu, s_star_tu);
            // Visual servoing loop. The objective is here to update the visual
            // features s = (c*_t_c, ThetaU), compute the control law and apply
            // it to the robot

            ros::Rate rate(pbvs_control_loop_hz);
            while(nh.ok()){
                try{
                    // lookup desired camera from and current camera frame transform
                    // tf_buffer.canTransform(desired_camera_frame, current_camera_frame, ros::Time(0), ros::Duration(2.0));
                 
                    tf_transform = tf_buffer.lookupTransform(desired_camera_frame, current_camera_frame, ros::Time::now(), ros::Duration(0.2));
               
                    double t_x = tf_transform.transform.translation.x;
                    double t_y = tf_transform.transform.translation.y;
                    double t_z = tf_transform.transform.translation.z;
                    vpTranslationVector trans_vec;
                    trans_vec.buildFrom(t_x, t_y, t_z);

                    double q_x = tf_transform.transform.rotation.x;
                    double q_y = tf_transform.transform.rotation.y;
                    double q_z = tf_transform.transform.rotation.z;
                    double q_w = tf_transform.transform.rotation.w;
                    fixQuat(q_x, q_y, q_z, q_w);
                    vpQuaternionVector quat_vec;
                    quat_vec.buildFrom(q_x, q_y, q_z, q_w);

                    // vpHomogeneousMatrix cdMc;
                    cdMc.buildFrom(trans_vec, quat_vec);

                    // PBVS
                    // ... cdMc is here the result of a pose estimation
                    // Update the current visual feature s
                    s_t.buildFrom(cdMc);  // Update translation visual feature
                    s_tu.buildFrom(cdMc); // Update ThetaU visual feature
                    v = task.computeControlLaw(); // Compute camera velocity skew
                    error = (task.getError()).sumSquare(); // error = s^2 - s_star^2

                    // convert twist in camera frame to body frame
                    // rearranging twist from [v w] to [w v]
                    Eigen::VectorXd Vc(6);
                    Vc << v[3], v[4], v[5], v[0], v[1], v[2];

                    // lookup desired camera from and robot body frame transform
                    Eigen::VectorXd Vb(6);
                    tf_transform = tf_buffer.lookupTransform(robot_base_frame, current_camera_frame, ros::Time(0), ros::Duration(3.0));
                    getTwistVectorBodyFrame(Vb, Vc, tf2::transformToEigen(tf_transform).matrix());

                    // limit linear and rotational velocity
                    limitLinVel(Vb[3]);
                    limitLinVel(Vb[4]);
                    limitLinVel(Vb[5]);
                    limitRotVel(Vb[0]);
                    limitRotVel(Vb[1]);
                    limitRotVel(Vb[2]);

                    // command end effector twist to robot
                    if(error >= pbvs_control_deadband_error){
                        control_input.velocity.data = {
                            static_cast<float>(Vb[3]), static_cast<float>(Vb[4]), static_cast<float>(Vb[5]), 
                            static_cast<float>(Vb[0]), static_cast<float>(Vb[1]), static_cast<float>(Vb[2])
                        };
                    }
                    else{
                        control_input.velocity.data = {0.0, 0.0, 0.0, 0.0, 0.0, 0.0};
                    }
                }
                catch(tf2::TransformException ex){
                    ROS_ERROR("%s", ex.what());
                    // control_input.velocity.data = {0.0, 0.0, 0.0, 0.0, 0.0, 0.0};
                }

                cart_vel_pub.publish(control_input);

                // std::stringstream ss0;
                // ss0.str(std::string());
                // ss0 << "body frame v[v w]:\n";
                // ss0 << control_input.velocity.data[0] << " \n";
                // ss0 << control_input.velocity.data[1] << " \n";
                // ss0 << control_input.velocity.data[2] << " \n";
                // ss0 << control_input.velocity.data[3] << " \n";
                // ss0 << control_input.velocity.data[4] << " \n";
                // ss0 << control_input.velocity.data[5] << " \n";
                // ss0 << "\n";
                // ROS_INFO("\n%s", ss0.str().c_str());

                rate.sleep();
            }
            task.kill();
        }
};

int main(int argc, char** argv){
    ros::init(argc, argv, "visual_servoing_pub");
    ros::NodeHandle nh("~");
    VisualServoing vs(nh);
    ros::spin();
    return 0;
}

/home/lizhihao/catkin_ws/src/xarm_ros/xarm_vision/d435i_xarm_setup/scripts/yolox_demo/Pitaya_point_catch.py

Command (in a terminal with the pytorch_1 virtual environment activated): python Pitaya_point_catch.py

# -*-coding: utf-8 -*-
import numpy as np
import pyrealsense2 as rs
# import json
# import sys
# import os
import cv2

import rospy
# import geometry_msgs.msg

# The following two lines resolve relative-path issues
import os, sys
os.chdir(sys.path[0])

from d435i_xarm_setup.msg import fruit

import tf.broadcaster as br
import tf.listener    as li
import tf.transformations as trf


br1 = br.TransformBroadcaster()


# initial RealSense configuration
pipeline = rs.pipeline()  # create the pipeline
config = rs.config()   # create the config
config.enable_stream(rs.stream.depth, 640, 480, rs.format.z16, 30)  # enable the depth stream
config.enable_stream(rs.stream.color, 640, 480, rs.format.bgr8, 30)   # enable the color stream
profile = pipeline.start(config)  # start streaming
align_to = rs.stream.color  # align to the color stream
align = rs.align(align_to)




def get_aligned_images():
    frames = pipeline.wait_for_frames()  # wait for a frameset
    aligned_frames = align.process(frames)  # align depth to color
    aligned_depth_frame = aligned_frames.get_depth_frame()  # depth frame of the aligned set
    color_frame = aligned_frames.get_color_frame()   # color frame of the aligned set
    depth_intrin = aligned_depth_frame.profile.as_video_stream_profile().intrinsics  # depth intrinsics (used to convert pixel coordinates to camera coordinates)
    color_image = np.asanyarray(color_frame.get_data())  # RGB image

    # return the depth intrinsics, the color image and the aligned depth frame
    return depth_intrin, color_image, aligned_depth_frame




if __name__ == '__main__':
    import time
    import sys
    import cv2
    import numpy as np
    from PIL import Image
    from yolo import YOLO

    rospy.init_node("pub_fruit_position")
    
    # real_stamped = geometry_msgs.msg.PoseStamped()
    # real_pose = geometry_msgs.msg.Pose()
    
    # pub_1 = rospy.Publisher("one_position", fruit, queue_size= 10)
    
    f = fruit()
    f1 = f.fruit_stamped

    # real_stamped.header.frame_id = "camera_color_optical_frame"

    # new_stamped = geometry_msgs.msg.PoseStamped()
    
    # try:
    #     li.TransformerROS.transformPoint("camera_color_optical_frame", real_stamped, new_stamped)
    # except:
    #     # rospy.ERROR("retry!!!!!!!!!!!!!!")
    #     rospy.sleep(1)
    camera_coordinate = []

    
    Is_true = 1
    yolo = YOLO()   # create the detector once instead of reloading it every frame
    while Is_true:
        depth_intrin, color_image, aligned_depth_frame = get_aligned_images() # aligned images and camera intrinsics
        # pixel (x, y) whose real 3D coordinates are required; this demo uses the detection centre point
        # ref, frame = cap.read()

        frame = color_image

        # convert BGR to RGB
        frame = cv2.cvtColor(frame,cv2.COLOR_BGR2RGB)
        
        # convert to a PIL Image
        frame = Image.fromarray(np.uint8(frame))

        # run detection
        fruit_dict, img = yolo.detect_image(frame)
        # print(fruit_dict)
        # print('&&&&&&&&&&&&&&&&&&')
        frame = np.array(img)
        # convert RGB back to BGR for OpenCV display
        frame = cv2.cvtColor(frame,cv2.COLOR_RGB2BGR)
        cv2.imshow("video",frame)
        
        if fruit_dict != {}:
            for ni in fruit_dict:
                # print(ni)
                (x, y) = fruit_dict[ni]
                # x, y = int(x),int(y)
                dis = aligned_depth_frame.get_distance(x, y)  # real depth value at pixel (x, y)
                camera_coordinate = rs.rs2_deproject_pixel_to_point(depth_intrin, [x, y], dis)  # real 3D coordinates of (x, y) in the camera frame as a 3-vector: camera_coordinate[2] equals dis, camera_coordinate[0] and camera_coordinate[1] are the real x/y offsets in the camera frame
                if (camera_coordinate[2] != 0.0 and camera_coordinate[0] != 0.0):
                    
                    print('-------------------------------')
                    print("camera coordinates:", camera_coordinate)
                    print('-------------------------------')
                    print("grasp point depth:", dis)

                    

                    o = f1.pose.position.x = camera_coordinate[0]
                    p = f1.pose.position.y = camera_coordinate[1]
                    q = f1.pose.position.z = camera_coordinate[2]
                    # pub_1.publish(f)
                    br1.sendTransform((f.fruit_stamped.pose.position.x, f.fruit_stamped.pose.position.y, f.fruit_stamped.pose.position.z),
                    trf.quaternion_from_euler(0, 0, 0),
                    rospy.Time.now(),
                    "object_fruit",
                    "camera_color_optical_frame")
                # rospy.loginfo("x, y, z: {}, {}, {}".format(o, p, q))
                
        #     rospy.set_param("obj_pose",obj_pose)

        key = cv2.waitKey(1)
        # Press esc or 'q' to close the image window
        if key & 0xFF == ord('q') or key == 27:
            pipeline.stop()
            break
    cv2.destroyAllWindows()

    
    

/home/lizhihao/catkin_ws/src/mfvs/script/pbvs_sub.py

Command: rosrun mfvs pbvs_sub.py

#!/usr/bin/env python

# The following two lines resolve relative-path issues
import os, sys
# os.chdir(sys.path[0])

sys.path.append(os.path.join(os.path.dirname(__file__), '../../'))

from xarm_ros.xarm_sdk.xarm_sdkk.xarm.wrapper import XArmAPI

import rospy
import time


from mfvs.msg import CartVelCmd
# rad = 57.3

rad = 750  # !!! velocity scaling factor; do not exceed roughly 770


# stop_velocity = [0, 0, 0, 0, 0, 0]
# velocity_1 = [a*rad for a in [1.74, 0, 0, 0, 0, 0]]
# velocity_2 = [a*rad for a in [0, -1.74, 0, 0, 0, 0]]

def getMarker(marker_1):
    # rospy.loginfo("received velocity: %.4f, %.4f, %.4f, %.4f, %.4f, %.4f", marker_1.velocity.data[0], marker_1.velocity.data[1], marker_1.velocity.data[2], marker_1.velocity.data[3], marker_1.velocity.data[4], marker_1.velocity.data[5])
    rospy.loginfo(marker_1.velocity.data)
    rospy.loginfo("-----------------")
    velocity_1 = [marker_1.velocity.data[0], marker_1.velocity.data[1], marker_1.velocity.data[2], 0, 0, 0]
    velocity = [a*rad for a in velocity_1]
    arm_sdk.vc_set_cartesian_velocity(velocity)
    time.sleep(0.05)
    
    
    

if __name__ == "__main__":
    # 1. initialize the node
    rospy.init_node("visual_servoing_sub")

    
    # start the arm ---------------------------
    speed = 0.3
    arm_sdk = XArmAPI('192.168.1.208', is_radian=True)

    if (arm_sdk.error_code != 0): arm_sdk.clean_error()
    if (arm_sdk.warn_code != 0): arm_sdk.clean_warn()
    arm_sdk.motion_enable(True)
    arm_sdk.set_mode(0)
    arm_sdk.set_state(0)
    time.sleep(1)
    arm_sdk.set_servo_angle(angle = [0,-1.21,0,0.56,0,0.28,0], speed= speed, wait = True, is_radian=True)
    # print(arm_sdk.get_servo_angle(), arm_sdk.get_servo_angle(is_radian=True))

    # arm_sdk.reset(wait=True)
    arm_sdk.set_mode(5)
    arm_sdk.set_state(0)
    time.sleep(1)
    
    # 2. create the subscriber
    sub = rospy.Subscriber("control_input",CartVelCmd,getMarker,queue_size=3)

    rospy.loginfo("================================")
   
    time.sleep(2)
    
    

    
    

    rospy.spin()  # 4. keep the node spinning
