Implementing Lane Following on a Smart Car (Python, Ubuntu, OpenCV)

         From a rookie in the Automation program at sztu.

1. Introduction

The lane-following code lives in camera_cmd.py under ~/config/teleop/src/smartcar/scripts; the core routine is the lane_detection function.

Program walkthrough:

Distortion correction:

First, the camera image must be corrected for lens distortion: tolerances in lens manufacturing and assembly introduce distortion that warps the raw image. Lens distortion falls into two categories, radial and tangential. The correction parameters can be obtained with MATLAB: typing cameraCalibrator in the Command Window launches the calibration app (the procedure is not repeated here, as many online tutorials cover it, e.g. "Matlab 摄像机标定+畸变校正 - 采男孩的小蘑菇 - 博客园"). Two points deserve care: in distortionCoe, the radial terms RadialDistortion (k1, k2, k3) and the tangential terms TangentialDistortion (p1, p2) must be arranged in the order (k1, k2, p1, p2, k3); and the IntrinsicMatrix obtained from MATLAB is the transpose of the intrinsic matrix OpenCV expects.

The resulting distortion-correction parameters are shown below:

[Figure 1: distortion-correction parameters]

OpenCV's built-in cv2.undistort() function can then correct the raw image.
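
As a minimal sketch of this step (the camera matrix and coefficients are the values defined in the code in section 2; 'frame.jpg' is a hypothetical raw frame):

import cv2
import numpy as np

# camera matrix from MATLAB's cameraCalibrator; note that MATLAB's
# IntrinsicMatrix is the transpose of the matrix OpenCV expects
intrinsicMat = np.array([[489.3828, 0.8764, 297.5558],
                         [0, 489.8446, 230.0774],
                         [0, 0, 1]])
# distortion coefficients reordered for OpenCV: (k1, k2, p1, p2, k3)
distortionCoe = np.array([-0.4119, 0.1709, 0, 0.0011, 0.018])

img = cv2.imread('frame.jpg')  # hypothetical raw camera frame
corr_img = cv2.undistort(img, intrinsicMat, distortionCoe, None, intrinsicMat)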

Perspective transform:

A perspective transformation (Perspective Transformation) exploits the collinearity of the perspective centre, the image point, and the object point: the image plane is rotated about the trace line (the perspective axis) by some angle, breaking the original bundle of projection rays while keeping the projected geometry on the image plane unchanged. The purpose of this part of the code is to convert the camera view into a bird's-eye view, which simplifies the subsequent computation and path planning.

First we need a calibration rectangle. The track we use is 60 cm wide, so the calibration rectangle is likewise 60 x 60 cm (four blue markers mark its four corners).

Next, the distortion-corrected camera frame is written out, for example with cv2.imwrite(), as shown below:

[Figure 2: distortion-corrected camera view]

Then transfer the image to the remote computer, open it with a paint tool, and hover the cursor over each corner of the calibration rectangle in turn, recording the four corner coordinates counter-clockwise starting from the top-left. Fill them into the src_pts matrix in that order; the dst_pts matrix gives the positions the calibration rectangle should occupy in the image after the transform, as shown below:

OpenCV's cv2.getPerspectiveTransform() function returns the perspective-transform matrix; warping the distortion-corrected image with it (via cv2.warpPerspective()) yields the bird's-eye view of the track:

[Figure 3]

[Figure 4: bird's-eye view of the track]
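
A minimal sketch of this step, reusing corr_img from the previous snippet (src_pts and dst_pts are the values defined in the code in section 2):

src_pts = np.float32([[128, 378], [1, 435], [639, 435], [488, 378]])  # corners read off the image
dst_pts = np.float32([[70, 0], [70, 480], [570, 480], [570, 0]])      # where they should land

M = cv2.getPerspectiveTransform(src_pts, dst_pts)     # road view -> bird's-eye view
Minv = cv2.getPerspectiveTransform(dst_pts, src_pts)  # inverse, for drawing results back
warped = cv2.warpPerspective(corr_img, M, (corr_img.shape[1], corr_img.shape[0]),
                             flags=cv2.INTER_LINEAR)  # bird's-eye view of the track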

Lane-line fitting and tracking:

Once the bird's-eye view of the lane lines is available, the lines are fitted, and the car's control strategy is derived from their slope and intercept.

Lane lines in the camera view fall into two main cases:

  • Both lane lines are visible:

The find_starter_centroids() function finds the x-coordinates of the lane line at the top and at the bottom of the view:

The two values are subtracted: if the difference is below a threshold, the lane-line slope is considered large, both lane lines are taken to be visible, and the offset between the top and bottom x-coordinates is used to steer the car (see the sketch after the figure):

[Figure 5]
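
A minimal sketch of this case, mirroring the corresponding branch in lane_detection() (warped_image is the binary bird's-eye image; the threshold of 80 and the PID gains are the values used in the code and need tuning on the real car):

top = find_starter_centroids(warped_image, y0=warped_image.shape[0],
                             peak_thresh=10, showMe=0)
bottom = find_starter_centroids(warped_image, y0=warped_image.shape[0] / 5,
                                peak_thresh=10, showMe=0)
offset = top['centroid'] - bottom['centroid']
if abs(offset) < 80:                       # small offset: both lane lines visible
    Vehicle_PID.update(offset)             # P controller on the centroid offset
    msg.drive.steering_angle = Vehicle_PID.output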

  • Only one lane line is visible:

When the offset from case ① exceeds the threshold, only one lane line is considered visible, and the car can only be controlled from that single line.

We use cv2.HoughLinesP() to fit the lane line and then compute its slope and intercept:

[Figure 6]

In the figure above, c1 and c0 denote the lane line's slope and intercept, respectively.

From the intercept we can tell whether the left or the right lane line is in view: since the image is 640*480, comparing the line's x-coordinate at the bottom of the image (computed from the slope and intercept) with 320 decides left versus right, and the slope and intercept then yield the car's steering angle (see the sketch after the figure).

[Figure 7]
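
A minimal sketch of the single-lane rule, matching the corresponding branch in lane_detection() (c1 and c0 come from the Hough-segment fit; 480, 640, and 320 are the image height, width, and midline; the gains k1 and k2 are the values from the code and need tuning):

def single_lane_steering(c0, c1, k1=20, k2=0.02):
    x_bottom = (480 - c0) / c1        # x where the fitted line crosses the image bottom
    if x_bottom < 320:                # left half of the image -> left lane line
        return 1.0 / c1 * k1 - k2 * x_bottom
    else:                             # right half -> right lane line
        return 1.0 / c1 * k1 + k2 * (640 - x_bottom)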

Starting the lane-following program:

Open a terminal and enter the following command:

roscore

[Figure 8: roscore running]

Open a terminal in ~/config/cam_test_ws and enter the command shown below:

[Figure 9]

Open a terminal in ~/config/teleop and enter the command shown below:

[Figure 10]

Open a terminal in ~/config/teleop/src/smartcar/scripts and enter the command shown below to start the lane-following program:

[Figure 11]

2. Code listing

#!/usr/bin/env python
# -*- coding: UTF-8 -*-
import numpy as np
import cv2
import matplotlib.pyplot as plt
from collections import deque
import rospy
from std_msgs.msg import String
from sensor_msgs.msg import Image
from cv_bridge import CvBridge, CvBridgeError
from ackermann_msgs.msg import AckermannDriveStamped
from laser_test.msg import laser_control
import jetson.inference
import jetson.utils
import argparse
import sys
 
import math
import os
#from skimage import morphology
import time
global x0 
x0 = 1
global peak_thresh
peak_thresh = 50 
global n 
n = 0
global laser_cmd
laser_cmd = 0
global msg 
msg = AckermannDriveStamped()
global flag1 
flag1 = 0
global flag2 
flag2 = 0
global flag3 
flag3 = 0
global c0,c1
intrinsicMat = np.array([[489.3828, 0.8764, 297.5558],
                            [0, 489.8446, 230.0774],
                            [0, 0, 1]])
distortionCoe = np.array([-0.4119,0.1709,0,0.0011, 0.018])
 
 
src_pts = np.float32([[128,378],[1,435],[639,435],[488,378]])#[220,306],[1,435],[639,435],[451,306]   [[132,358],[1,435],[639,435],[501,353] [160,334],[38,378],[628,378],[502,334] 
dst_pts = np.float32([[70,0],[70,480],[570,480],[570,0]])
showMe = 0
 
 
#net = jetson.inference.imageNet("resnet-18",labels="labels.txt")
def guide_board(data):
 
    #net = jetson.inference.imageNet(opt.network, sys.argv)
    #img = CvBridge().imgmsg_to_cv2(data, "bgr8")
    image = cv2.cvtColor(data, cv2.COLOR_BGR2HSV)
    #blue_lower=np.array([100,50,50])
    # blue_upper=np.array([124,255,255])
    lower_blue = np.array([100, 50, 50])
    upper_blue = np.array([124, 255, 255])
    mask = cv2.inRange(image, lower_blue, upper_blue)
    imge0 = cv2.bitwise_and(data, data, mask=mask)
    cv2.imwrite('123.jpg',imge0)
    #data0 = np.array(data)
    #data = jetson.utils.cudaMemory(data0)
    #img = jetson.utils.cudaCrop(data)
    img = jetson.utils.loadImage("123.jpg")
    class_idx, confidence = net.Classify(img)
    print(class_idx,confidence)
    return class_idx,confidence
    
 
def display(img,title,color=1):
    '''
    func:display image
    img: rgb or grayscale
    title:figure title
    color:show image in color(1) or grayscale(0)
    '''
    if showMe:
        if color:
            plt.imshow(img)
        else:
            plt.imshow(img ,cmap='gray')
        plt.title(title)
        plt.axis('off')
        plt.show()
 
 
def light_detection(origin_img):      
    light_cmd=0               
    hsv=cv2.cvtColor(origin_img,cv2.COLOR_BGR2HSV)   
    hsv1 = hsv.copy()
    element = cv2.getStructuringElement(cv2.MORPH_RECT,(5,5))
    red_lower = np.array([0,95,230])        # lower and upper HSV thresholds for the red light
    red_upper = np.array([5,255,255])
    red_mask = cv2.inRange(hsv,red_lower,red_upper)
    red_target = cv2.bitwise_and(hsv,hsv,mask = red_mask)
    red_target = cv2.erode(red_target,element)
    red_target = cv2.dilate(red_target,element)
    red_gray = cv2.cvtColor(red_target,cv2.COLOR_BGR2GRAY)
    r_ret,r_binary = cv2.threshold(red_mask,127,255,cv2.THRESH_BINARY)
    r_gray2 = cv2.Canny(r_binary, 100, 200) 
    r = r_gray2[:,:] == 255
    count_red = len(r_gray2[r])
    if count_red>700:
        redLight = 1
    else:
        redLight = 0
    green_lower = np.array([70,95,230])    
    green_upper = np.array([130,255,255])
    green_mask = cv2.inRange(hsv1,green_lower,green_upper)
    green_target = cv2.bitwise_and(hsv1,hsv1,mask = green_mask)
    green_target = cv2.erode(green_target,element)
    green_target = cv2.dilate(green_target,element)
    green_gray = cv2.cvtColor(green_target,cv2.COLOR_BGR2GRAY)
    g_ret,g_binary = cv2.threshold(green_mask,127,255,cv2.THRESH_BINARY)
    g_gray2 = cv2.Canny(g_binary, 100, 200)       
    g = g_gray2[:,:] == 255
    count_green = len(g_gray2[g])
    if count_green>700:
        greenLight = 1
    else:
        greenLight = 0
    if redLight ==1 :
        light_cmd = 0
    if greenLight == 1 :
        light_cmd = 1
    return light_cmd
def direction_test(image,tem):
 
    #cv.imshow('canny',canny)
    #cv.waitKey(3)
    undstrt = cv2.undistort(image, intrinsicMat, distortionCoe, None, intrinsicMat)
    #masked,x,y,w,h = findSquare(undstrt)
    gray = cv2.cvtColor(undstrt, cv2.COLOR_BGR2GRAY)
    
    tem = cv2.cvtColor(tem, cv2.COLOR_BGR2GRAY)
    canny = cv2.Canny(gray, 100, 200)
    tem_result = cv2.matchTemplate(canny, tem, 4)
    #print((left.max(),right.max()))
    '''
    l = left.max()
    r = right.max()
    print(max(l,r))
    if l<2000000 and r<2000000:
        print('null')
    if l>r:
        print('left')
        return -1
    else:
        print('right')
        return 1
        '''
   # print(tem_result.max())
    return tem_result.max()
def turn(img):
    list=[0,0,0,0,0]
    park_max=0
    gostraight_max=0
    turnleft_max=0
    turnright_max=0
    turnback_max=0
    for filename in os.listdir(r'/home/nano/config/teleop/src/smartcar/scripts/template/park'): 
        
        filenames = '/home/nano/config/teleop/src/smartcar/scripts/template/park/'+ filename
 
        tem = cv2.imread(filenames)
        park = direction_test(img,tem)
        if park>park_max:
            park_max=park
    #list[0]=park_max
    for filename in os.listdir(r'/home/nano/config/teleop/src/smartcar/scripts/template/gostraight'): 
 
        filenames = '/home/nano/config/teleop/src/smartcar/scripts/template/gostraight/'+filename
 
        tem = cv2.imread(filenames)
        gostraight = direction_test(img,tem)
        if gostraight>gostraight_max:
            gostraight_max=gostraight
    list[1]=gostraight_max
    for filename in os.listdir(r'/home/nano/config/teleop/src/smartcar/scripts/template/turnleft'): 
        
        filenames = '/home/nano/config/teleop/src/smartcar/scripts/template/turnleft/'+ filename
 
        tem = cv2.imread(filenames)
        turnleft = direction_test(img,tem)
        if turnleft>turnleft_max:
            turnleft_max=turnleft
    list[2]=turnleft_max
    for filename in os.listdir(r'/home/nano/config/teleop/src/smartcar/scripts/template/turnright'): 
        
        filenames = '/home/nano/config/teleop/src/smartcar/scripts/template/turnright/'+ filename
 
        tem = cv2.imread(filenames)
        turnright = direction_test(img,tem)
        if turnright>turnright_max:
            turnright_max=turnright
    list[3]=turnright_max
    for filename in os.listdir(r'/home/nano/config/teleop/src/smartcar/scripts/template/turnback'): 
        
        filenames = '/home/nano/config/teleop/src/smartcar/scripts/template/turnback/'+ filename
 
        tem = cv2.imread(filenames)
        turnback = direction_test(img,tem)
        if turnback>turnback_max:
            turnback_max=turnback
    #list[4]=turnback_max
    list = np.array(list)    # convert so np.max/np.where work element-wise
    max = np.max(list)    # maximum template-matching score
    print(max)
    max_p = np.where(list==max)[0]  
    #if left<2000000 and right<2000000:
        #delay(0.1,27,23)
    #print('!!!!')
    #print((left,right))
    print('park_max',park_max)
    print('gostraight_max',gostraight_max)
    print('turnleft_max',turnleft_max)
    print('turnright_max',turnright_max)
    print('turnback_max',turnback_max)
    if max_p==0:
        print('park')
    if max_p==1:
        print('gostraight')
    if max_p==2:
        print('turnleft')
    if max_p==3:
        print('turnright')
    if max_p==4:
        print('turnback')
    return max_p, park_max,gostraight_max,turnleft_max,turnright_max,turnback_max
 
def birdView(img,M):
    '''
    Transform image to birdeye view
    img:binary image
    M:transformation matrix
    return a warped image
    '''
    img_sz = (img.shape[1],img.shape[0])
    img_warped = cv2.warpPerspective(img,M,img_sz,flags = cv2.INTER_LINEAR)
    return img_warped
def perspective_transform(src_pts,dst_pts):
    '''
    perspective transform
    args: source and destination points
    return M and Minv
    '''
    M = cv2.getPerspectiveTransform(src_pts,dst_pts)
    Minv = cv2.getPerspectiveTransform(dst_pts,src_pts)
    return {'M':M,'Minv':Minv}
# original image to bird view (transformation)
 
 
def find_centroid(image,peak_thresh,window,showMe):
    '''
    find centroid in a window using histogram of hotpixels
    img:binary image
    window with specs {'x0','y0','width','height'}
    (x0,y0) coordinates of bottom-left corner of window
    return x-position of centroid ,peak intensity and hotpixels_cnt in window
    '''
    #crop image to window dimension
    mask_window = image[int(window['y0']-window['height']):int(window['y0']),
                        int(window['x0']):int(window['x0']+window['width'])]
    histogram = np.sum(mask_window,axis=0)
    centroid = np.argmax(histogram)
    hotpixels_cnt = np.sum(histogram)
    peak_intensity = histogram[centroid]
    if peak_intensity<=peak_thresh:
        centroid = int(round(window['x0']+window['width']/2))
        peak_intensity = 0
    else:
        centroid = int(round(centroid+window['x0']))
    '''
    if showMe:
        plt.plot(histogram)
        plt.title('Histogram')
        plt.xlabel('horizontal position')
        plt.ylabel('hot pixels count')
        plt.show()
    '''
    return (centroid,peak_intensity,hotpixels_cnt)
def find_starter_centroids(image,y0,peak_thresh,showMe):
    '''
    find starter centroids using histogram
    peak_thresh:if peak intensity is below a threshold use histogram on the full height of the image
    returns x-position of centroid and peak intensity
    '''
    window = {'x0':0,'y0':y0,'width':image.shape[1],'height':image.shape[0]/5}
    # get centroid
    centroid , peak_intensity,_ = find_centroid(image,peak_thresh,window,showMe)
    if peak_intensity < peak_thresh:
        window['height'] = image.shape[0]
        centroid, peak_intensity, _ = find_centroid(image, peak_thresh, window, showMe)
    return {'centroid': centroid, 'peak_intensity': peak_intensity}


def run_sliding_window(image, centroid_starter, sliding_window_specs, showMe):
    '''
    collect lane-line hotpixels by sliding a window from the bottom of the
    image to the top
    NOTE: this function's definition was missing from the original listing;
    the header and window initialisation below are reconstructed from how it
    is called and from the variables used in the loop body
    image: binary bird's-eye image
    centroid_starter: x-position the first window is centred on
    sliding_window_specs: {'width','n_steps'}
    returns the hotpixels log and the annotated image
    '''
    window = {'x0': max(int(centroid_starter - sliding_window_specs['width'] / 2), 0),
              'y0': image.shape[0],
              'width': sliding_window_specs['width'],
              'height': round(image.shape[0] / sliding_window_specs['n_steps'])}
    hotpixels_log = {'x': [], 'y': []}
    centroids_log = []
    out_img = image.copy()
    for step in range(sliding_window_specs['n_steps']):
        if window['x0'] < 0:
            window['x0'] = 0
        if (window['x0'] + sliding_window_specs['width']) > image.shape[1]:
            window['x0'] = image.shape[1] - sliding_window_specs['width']
        centroid, peak_intensity, hotpixels_cnt = find_centroid(image, peak_thresh, window, showMe=0)
        if step == 0:
            starter_centroid = centroid
        if hotpixels_cnt / (window['width'] * window['height']) > 0.6:
            window['width'] = window['width'] * 2
            window['x0'] = round(window['x0'] - window['width'] / 2)
            if (window['x0'] < 0): window['x0'] = 0
            if (window['x0'] + window['width']) > image.shape[1]:
                window['x0'] = image.shape[1] - window['width']
            centroid, peak_intensity, hotpixels_cnt = find_centroid(image, peak_thresh, window, showMe=0)
 
            # if showMe:
            # print('peak intensity{}'.format(peak_intensity))
            # print('This is centroid:{}'.format(centroid))
        mask_window = np.zeros_like(image)
        mask_window[int(window['y0'] - window['height']):int(window['y0']), int(window['x0']):int(window['x0'] + window['width'])] = image[int(window['y0'] - window['height']):int(window['y0']), int(window['x0']):int(window['x0'] + window['width'])]
 
        hotpixels = np.nonzero(mask_window)
        # print(hotpixels_log['x'])
 
        hotpixels_log['x'].extend(hotpixels[0].tolist())
        hotpixels_log['y'].extend(hotpixels[1].tolist())
        # update record of centroid
        centroids_log.append(centroid)
        out_img = cv2.rectangle(image,(int(window['x0']), int(window['y0'] - window['height'])), (int(window['x0'] + window['width']), int(window['y0'])), (255, 0, 0), 2)
        
        ''' 
        if Left_or_Right == 0:
            result[0:image.shape[0],0:image.shape[1]] = out_img
        if Left_or_Right == 1:
            result[0:image.shape[0],image.shape[1]:image.shape[1]*2] = out_img
            
       
        
        if showMe:
            cv2.rectangle(out_img,
                          (int(window['x0']), int(window['y0'] - window['height'])),
                          (int(window['x0'] + window['width']), int(window['y0'])), (0, 255, 0), 2)
            if step == 9:
                plt.imshow(out_img)
                plt.show()
            
            print(window['y0'])
            plt.imshow(out_img)
         
        '''
        # set next position of window and use standard sliding window width
        window['width'] = sliding_window_specs['width']
        window['x0'] = round(centroid - window['width'] / 2)
        window['y0'] = window['y0'] - window['height']
    return hotpixels_log, out_img
 
 
 
 
def MahalanobisDist(x, y):
    '''
    Mahalanobis Distance for bi-variate distribution
    '''
    covariance_xy = np.cov(x, y, rowvar=0)
    inv_covariance_xy = np.linalg.inv(covariance_xy)
    xy_mean = np.mean(x), np.mean(y)
    x_diff = np.array([x_i - xy_mean[0] for x_i in x])
    y_diff = np.array([y_i - xy_mean[1] for y_i in y])
    diff_xy = np.transpose([x_diff, y_diff])
 
    md = []
    for i in range(len(diff_xy)):
        md.append(np.sqrt(np.dot(np.dot(np.transpose(diff_xy[i]), inv_covariance_xy), diff_xy[i])))
    return md
 
 
def MD_removeOutliers(x, y, MD_thresh):
    '''
    remove pixel outliers using Mahalanobis distance
    '''
    MD = MahalanobisDist(x, y)
    threshold = np.mean(MD) * MD_thresh
    nx, ny, outliers = [], [], []
    for i in range(len(MD)):
        if MD[i] <= threshold:
            nx.append(x[i])
            ny.append(y[i])
        else:
            outliers.append(i)
    return (nx, ny)
 
 
 
 
def update_tracker(tracker,new_value):
    '''
    update tracker(self.bestfit or self.bestfit_real or radO Curv or hotpixels) with new coeffs
    new_coeffs is of the form {'a2':[val2,...],'a1':[val1,...],'a0':[val0,...]}
    tracker is of the form {'a2':[val2,...]}
    update tracker of radius of curvature
    update allx and ally with hotpixels coordinates from last sliding window
    '''
    allx = []
    ally = []
    if tracker =='bestfit':
        bestfit['a0'].append(new_value['a0'])
        bestfit['a1'].append(new_value['a1'])
        bestfit['a2'].append(new_value['a2'])
    elif tracker == 'bestfit_real':
        bestfit_real['a0'].append(new_value['a0'])
        bestfit_real['a1'].append(new_value['a1'])
        bestfit_real['a2'].append(new_value['a2'])
    elif tracker == 'radOfCurvature':
        radOfCurv_tracker.append(new_value)
    elif tracker == 'hotpixels':
        allx.append(new_value['x'])
        ally.append(new_value['y'])
 
def polynomial_fit(data):
    '''
    polynomial fit
    a0 + a1*x + a2*x**2
    data:dictionary with x and y values{'x':[],'y':[]}
    '''
    a2,a1,a0 = np.polyfit(data['x'],data['y'],2)
    return {'a0':a0,'a1':a1,'a2':a2}
 
 
def predict_line(x0,xmax,coeffs):
    '''
    predict road line using polyfit coefficients
    x values are in range (x0,xmax)
    polyfit coeffs:{'a2':,'a1':,'a0':}
    returns array of [x,y] predicted points ,x along image vertical / y along image horizontal direction
    '''
    x_pts = np.linspace(x0,xmax-1,num=xmax)
    pred = coeffs['a2']*x_pts**2+coeffs['a1']*x_pts+coeffs['a0']
    return np.column_stack((x_pts,pred))
 
 
class PID:
    def __init__(self, P=0.2, I=0.0, D=0.0):
        self.Kp = P
        self.Ki = I
        self.Kd = D
        self.sample_time = 0.00
        self.current_time = time.time()
        self.last_time = self.current_time
        self.clear()
    def clear(self):
        self.SetPoint = 0.0
        self.PTerm = 0.0
        self.ITerm = 0.0
        self.DTerm = 0.0
        self.last_error = 0.0
        # Windup Guard
        self.int_error = 0.0
        self.windup_guard = 20.0
        self.output = 0.0
    def update(self, feedback_value):     
        error = self.SetPoint - feedback_value
        self.current_time = time.time()
        delta_time = self.current_time - self.last_time
        delta_error = error - self.last_error
        if (delta_time >= self.sample_time):
            self.PTerm = self.Kp * error
            self.ITerm += error * delta_time
            if (self.ITerm < -self.windup_guard):
                self.ITerm = -self.windup_guard
            elif (self.ITerm > self.windup_guard):
                self.ITerm = self.windup_guard
            self.DTerm = 0.0
            if delta_time > 0:
                self.DTerm = delta_error / delta_time
            self.last_time = self.current_time
            self.last_error = error
            self.output = self.PTerm + (self.Ki * self.ITerm) + (self.Kd * self.DTerm)
    def setKp(self, proportional_gain):
        self.Kp = proportional_gain
    def setKi(self, integral_gain):
        self.Ki = integral_gain
    def setKd(self, derivative_gain):
        self.Kd = derivative_gain
    def setWindup(self, windup):     
        self.windup_guard = windup
    def setSampleTime(self, sample_time):
        self.sample_time = sample_time
 
def compute_radOfCurvature(coeffs,pt):
    return ((1+(2*coeffs['a2']*pt+coeffs['a1'])**2)**1.5)/np.absolute(2*coeffs['a2'])
def binarize(img):
    """Binarize a grayscale image.
    Binarize the input grayscale image by ostu threshold method.
    Args:
        img: an image. Grayscale image is preffered.
    Returns:
       img_binary: the binarized image
    """
 
    # Make sure that img_gray is a grayscale image.
    if len(img.shape) == 2:
        img_gray = img
    elif len(img.shape) == 3 and img.shape[2] == 3:
        img_gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
    else:
        #print("Converting image failed:", img.shape)
        return None
    # Apply the threshold method. It can be improved by changing the arguments.
    _, img_binary = cv2.threshold(
        img_gray, 170, 255, cv2.THRESH_OTSU)
 
    return img_binary
 
def lane_detection(img):
    start_time=time.time()
    corr_img = cv2.undistort(img, intrinsicMat, distortionCoe, None, intrinsicMat)
    #cv2.imwrite('000.jpg',corr_img)
    cv2.imshow('corr_img',corr_img)
    cv2.waitKey(25)
    gray_ex = cv2.cvtColor(corr_img,cv2.COLOR_RGB2GRAY)
    display(gray_ex,'Apply Camera Correction',color=0)
    #ret, combined_output = cv2.threshold(gray_ex, 0, 255, cv2.THRESH_BINARY | cv2.THRESH_OTSU)
    combined_output = cv2.Canny(gray_ex, 400, 800) #100, 200 75,200
    #cv2.imwrite('001.jpg',combined_output)
 
    #combined_output = image_process(gray_ex)
    display(combined_output,'Combined output',color=0)
    #mask = np.zeros_like(combined_output)
    #vertices = np.array([[(300,278),(0,435),(640,435),(450,250)]],dtype=np.int32)
    #cv2.fillPoly(mask,vertices,1)
    #masked_image = cv2.bitwise_and(combined_output,mask)
    #display(masked_image,'Masked',color=0)
    
    min_sz = 50
    #cleaned =              morphology.remove_small_objects(masked_image.astype('bool'),min_size=min_sz,connectivity=2)
    cleaned = combined_output
    display(cleaned,'cleaned',color=0)
    # original image to bird view (transformation)
   
    transform_matrix = perspective_transform(src_pts,dst_pts)
    warped_image = birdView(cleaned*1.0,transform_matrix['M'])
    
    warped_image = cv2.dilate(warped_image, np.ones((15,15), np.uint8))
    warped_image = cv2.erode(warped_image, np.ones((7,7), np.uint8))
    #cv2.imwrite('002.jpg',warped_image)
    #pubbrid_view.publish(CvBridge().cv2_to_imgmsg(warped_image))
    
    display(cleaned,'undistorted',color=0)
    display(warped_image,'BirdViews',color=0)
    
    #white_Left = cv2.countNonZero(warped_image[:,0:warped_image.shape[1]/2-50])
    #white_Right = cv2.countNonZero(warped_image[:,warped_image.shape[1]/2+50:warped_image.shape[1]])
######mid_time
    mid_time=time.time()
    HoughLine_image = np.array(warped_image,np.uint8)
    lines = cv2.HoughLinesP(HoughLine_image, 1, np.pi/180, 100, minLineLength=100, maxLineGap=50)
    if lines is not None :
        for x1,y1,x2,y2 in lines[0]:
            cv2.line(HoughLine_image,(x1,y1),(x2,y2),(255,0,0),1)
########################################################################fit##################################################################################
 
    bottom_crop = -40
    #warped_image = warped_image[0:bottom_crop,:]
    peak_thresh = 10
    showMe = 1
 
 
    centroid_starter_top = find_starter_centroids(warped_image,y0=warped_image.shape[0],
                                               peak_thresh=peak_thresh,showMe=showMe)
    centroid_starter_bottom = find_starter_centroids(warped_image,y0=warped_image.shape[0]/5,
                                               peak_thresh=peak_thresh,showMe=showMe)
    print('centroid_starter_TOP',centroid_starter_top['centroid'])
    print('centroid_starter_BOTTOM',centroid_starter_bottom['centroid'])
 
 
  
    ''' 
    centroid_starter_right = find_starter_centroids(warped_image,x0=warped_image.shape[1]/2,
                                               peak_thresh=peak_thresh,showMe=showMe)
    centroid_starter_left = find_starter_centroids(warped_image,x0=0,peak_thresh=peak_thresh,
                                              showMe=showMe)
    sliding_window_specs = {'width': 60, 'n_steps': 10}
    print('centroid_starter_right',centroid_starter_right)
    #cv2.imshow('warped_image',warped_image)
    #cv2.waitKey(25)
####window#####
   
    '''
 
    MD_thresh = 1.8
    #log_lineLeft['x'], log_lineLeft['y'] = \
    #MD_removeOutliers(log_lineLeft['x'], log_lineLeft['y'], MD_thresh)
    #log_lineRight['x'], log_lineRight['y'] = \
    #MD_removeOutliers(log_lineRight['x'], log_lineRight['y'], MD_thresh)
    
    ym_per_pix = 0.6/480
    xm_per_pix = 0.6/640
 
    ###log_lineLeft,out_img_Left = run_sliding_window(warped_image, centroid_starter_left['centroid'],sliding_window_specs, showMe=showMe)
    ###log_lineRight,out_img = run_sliding_window(out_img_Left, centroid_starter_right['centroid'] , sliding_window_specs,showMe=showMe)
 
    global c0,c1 ##################################
    if lines is not None :
        for i in range(warped_image.shape[1]/5):
            x = 5*i
            px = [x1,x2]
            py = [y1,y2]
            c1,c0 = np.polyfit(px, py, 1)
            y = int(c1*x + c0)
            cv2.circle(warped_image, (x, y), 3, (255, 0, 0), -1)
            
        print('HoughLine is detected') 
        print('c1',c1)  
    cv2.imshow('warped_image',warped_image)
    cv2.waitKey(25)
    
    ''' 
    for i in range(len(log_lineRight['y'])):
        log_lineRight['y'][i] = log_lineRight['y'][i] + out_img.shape[1]/2
    
    #log_lineRight['x'] = log_lineRight['x'] + out_img.shape[1]/2
    fit_lineRight_singleframe = polynomial_fit(log_lineRight)
    fit_lineLeft_singleframe = polynomial_fit(log_lineLeft)
    dis_Left = log_lineLeft['y'][len(log_lineLeft['y'])-1] - log_lineLeft['y'][0]
    dis_Right = log_lineRight['y'][len(log_lineRight['y'])-1] - log_lineRight['y'][0]
    var_pts = np.linspace(0,corr_img.shape[0]-1,num=corr_img.shape[0])
    pred_lineLeft_singleframe = predict_line(0,corr_img.shape[0],fit_lineLeft_singleframe)
    fit_lineLeft_real = polynomial_fit({'x':[i*xm_per_pix for i in log_lineLeft['x']],
                                    'y':[i*ym_per_pix for i in log_lineLeft['y']]})
    pred_lineRight_sigleframe = predict_line(0,corr_img.shape[0],fit_lineRight_singleframe)
    fit_lineRight_real = polynomial_fit({'x':[i*xm_per_pix for i in log_lineRight['x']],
                                               'y':[i*ym_per_pix for i in log_lineRight['y']]})
   
    pt_curvature = corr_img.shape[0]
    radOfCurv_r = compute_radOfCurvature(fit_lineRight_real,pt_curvature*ym_per_pix)
    radOfCurv_l = compute_radOfCurvature(fit_lineLeft_real,pt_curvature*ym_per_pix)
    average_radCurv = (radOfCurv_r+radOfCurv_l)/2
    
    center_of_lane = (pred_lineLeft_singleframe[:,1][-1]+pred_lineRight_sigleframe[:,1][-1])/2
    offset = (corr_img.shape[1]/2 - center_of_lane)*xm_per_pix
    side_pos = 'right'
    if offset <0:
        side_pos = 'left'
    wrap_zero = np.zeros_like(gray_ex).astype(np.uint8)
    color_wrap = np.dstack((wrap_zero,wrap_zero,wrap_zero))
    left_fitx = fit_lineLeft_singleframe['a2']*var_pts**2 + fit_lineLeft_singleframe['a1']*var_pts + fit_lineLeft_singleframe['a0']
    right_fitx = fit_lineRight_singleframe['a2']*var_pts**2 +     fit_lineRight_singleframe['a1']*var_pts+fit_lineRight_singleframe['a0']
    pts_left = np.array([np.transpose(np.vstack([left_fitx,var_pts]))])
    pts_right = np.array([np.flipud(np.transpose(np.vstack([right_fitx,var_pts])))])
    pts = np.hstack((pts_left,pts_right))
    cv2.fillPoly(color_wrap,np.int_([pts]),(0,255,0))
    cv2.putText(color_wrap,'|',(int(corr_img.shape[1]/2),corr_img.shape[0]-10),cv2.FONT_HERSHEY_SIMPLEX,2,(0,0,255),8)
    cv2.putText(color_wrap,'|',(int(center_of_lane),corr_img.shape[0]-10),cv2.FONT_HERSHEY_SIMPLEX,1,(255,0,0),8)
    newwrap = cv2.warpPerspective(color_wrap,transform_matrix['Minv'],(corr_img.shape[1],corr_img.shape[0])) 
    result = cv2.addWeighted(corr_img,1,newwrap,0.3,0)
    cv2.putText(result,'Vehicle is ' + str(round(offset,3))+'m '+side_pos+' of center',
            (50,100),cv2.FONT_HERSHEY_SIMPLEX,1,(255,255,255),thickness=2)
    cv2.putText(result,'Radius of curvature: '+str(round(average_radCurv,3))+'m',(50,50),cv2.FONT_HERSHEY_SIMPLEX,1,(255,255,255),thickness=2)
   # cv2.imshow("result",result)
    #cv2.waitKey(25)
    '''
    end_time=time.time()
 
    time_diff1=mid_time-start_time
    time_diff2=end_time-mid_time
    print('time_diff1',time_diff1)
    print('time_diff2',time_diff2)
    #print('offset ', offset)
    msg.drive.speed = 20
    
    #print('dis_Left',dis_Left)
    #print('dis_Right',dis_Right)
    if  math.fabs(centroid_starter_top['centroid']-centroid_starter_bottom['centroid'])<80:             
  ##((dis_Left < 90)&(dis_Left > 30) & (dis_Right > -10 )&(dis_Right < 70 )) :#四个dis的阈值都需要调参
        print('both of the lanes are detected')
        offset=centroid_starter_top['centroid']-centroid_starter_bottom['centroid']
        Vehicle_PID.update(offset)
        msg.drive.steering_angle = Vehicle_PID.output###sign of the output not yet verified
        
    else :
        print('only single lane is detected')
        k1 = 20#50  45# k1 and k2 need tuning
        k2 = 0.02#0.02  0.035
#####need to adjust para####
 
        if (480-c0)/c1<320:
            msg.drive.steering_angle =  1/c1*k1 - k2*(480- c0)/c1 
            print('Left Lane')
                                 
        if (480-c0)/c1>320:
            msg.drive.steering_angle =  1/c1*k1 + k2*(640-(480-c0)/c1)  
            print('Right Lane')     
 
          
    print('steering_angle',msg.drive.steering_angle)
    
 
 
def blue_pixel_detection(img):
 
    cross_verify = 0
    corr_img = cv2.undistort(img, intrinsicMat, distortionCoe, None, intrinsicMat)
    hsv=cv2.cvtColor(corr_img,cv2.COLOR_BGR2HSV)
    #cv2.imshow('hsv',hsv)
#extract the blue region
 
    blue_lower=np.array([100,50,50])
    blue_upper=np.array([124,255,255])
    blue_mask=cv2.inRange(hsv,blue_lower,blue_upper)
#blur
    blue_blurred=cv2.blur(blue_mask,(9,9))
    #cv2.imshow('blurred',blurred)
#binarize
    ret,binary=cv2.threshold(blue_blurred,127,255,cv2.THRESH_BINARY)
    transform_matrix = perspective_transform(src_pts,dst_pts)
    blue_warped_image = birdView(binary*1.0,transform_matrix['M'])
    cv2.imshow('blurred binary',blue_warped_image)
    cv2.waitKey(25)
    blue_point = cv2.countNonZero(blue_warped_image)
    return blue_point
 
 
def traffic_sign_detection(img):
    global flag1
    global flag2
    global sign
    global park_max
    global gostraight_max
    global turnleft_max
    global turnright_max
    global turnback_max
    global time1
    threshold = 80000000############ the threshold needs tuning
    blue_point = blue_pixel_detection(img)
    print('The number of blue pixels',blue_point)
    if (blue_point < threshold) & (flag1 == 0):
        lane_detection(img)
        
    else:
        flag1 = 1
        
        print('Parking line is detected')
 
        
        if (flag2 == 0):
            
	    #######speed=0
            msg.drive.speed = 0
            pub.publish(msg)
            time.sleep(1)
            #sign,park_max,gostraight_max,turnleft_max,turnright_max,turnback_max = turn(img)
            sign,_ =guide_board(img)
            flag2 = 1
            time1 = time.time()
        else:           
            if(sign == 0):
                msg.drive.steering_angle = 0
                msg.drive.speed = 20
            if(sign == 1):
                msg.drive.steering_angle = 10#the four steering angles below all need tuning
                msg.drive.speed = 20
            if(sign == 2):
                msg.drive.steering_angle = -10
                msg.drive.speed = 20
            if(sign == 3):
                msg.drive.steering_angle = 0
                msg.drive.speed = 00
                
            if(sign == 4):
                msg.drive.steering_angle = 11
                msg.drive.speed = 20
            time2 = time.time()
            if (time2 - time1>3) & (blue_point > threshold):
                flag1 = 0
            #print('park_max',park_max)
            #print('gostraight_max',gostraight_max)
            #print('turnleft_max',turnleft_max)
            #print('turnright_max',turnright_max)
            #print('turnback_max',turnback_max)
    print('The speed is ', msg.drive.speed)   
    print('The steering angle is ',  msg.drive.steering_angle)
    pub.publish(msg)
def front_camera_callback(data):
    #class_idx, confidence = guide_board(data)
    #print(class_idx,confidence)
    
    time1=time.time()
    img = CvBridge().imgmsg_to_cv2(data, "bgr8")
    global n
 
    print(laser_cmd)
    if laser_cmd == 0:
        if(n>=1):
            traffic_sign_detection(img)
        else:
            if(1):#light_detection(img)
                n=n+1
            else:
                msg.drive.speed = 0
                pub.publish(msg)
                n = n
    #lane_detection(img)
    time2=time.time()
    print('totaltime',time2-time1)
    
def laser_callback(data):
    global laser_cmd
    laser_cmd = data.laser_control
 
def findSquare( image ):
        gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
        blurred = cv2.GaussianBlur(gray, (7, 7), 0)
        edged = cv2.Canny(blurred, 60, 120)
        cv2.imshow("edged",edged  )
        cv2.waitKey(25)
        # find contours in the edge map
        (cnts, _) = cv2.findContours(edged.copy(), cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
        # loop over our contours to find hexagon
        cnts = sorted(cnts, key = cv2.contourArea, reverse = True)[:50]
        screenCnt = None
        for c in cnts:
            # approximate the contour
            peri = cv2.arcLength(c, True)
            approx = cv2.approxPolyDP(c, 0.004 * peri, True)
            # if our approximated contour has four points, then
            # we can assume that we have found our square
 
            if len(approx) >= 4:
                screenCnt = approx
                x,y,w,h = cv2.boundingRect(c)
                cv2.drawContours(image, [approx], -1, (0, 0, 255), 1)
                #cv2.imshow("Screen", image)
                #create the mask and remove rest of the background
                mask = np.zeros(image.shape[:2], dtype = "uint8")
                cv2.drawContours(mask, [screenCnt], -1, 255, -1)
                masked = cv2.bitwise_and(image, image, mask = mask)
                cv2.circle(masked, (int(x+w/2), int(y+h/2)), 3, (0, 0, 255), -1)
                cv2.imshow("Masked",masked  )
                cv2.waitKey(25)
                #cv2.imwrite('004.jpg',masked)
                #crop the masked image to to be compared to referance image
                #cropped = masked[y:y+h,x:x+w]
                #scale the image so it is fixed size as referance image
                #cropped = cv2.resize(cropped, (200,200), interpolation =cv2.INTER_AREA)
 
                return masked,x,y,w,h
 
def rear_camera_callback(data):   
    global flag3 
    img = CvBridge().imgmsg_to_cv2(data, "bgr8")
    corr_img = cv2.undistort(img, intrinsicMat, distortionCoe, None, intrinsicMat)
    #cv2.imwrite('005.jpg',corr_img)
    down_img = corr_img[:, corr_img.shape[1]/2:corr_img.shape[1]]
    masked,x,y,w,h = findSquare(down_img)
    #k1,k2,k3,k4,k5,k6=turn(img)
    cv2.waitKey(25)
    dis = int(-1.05*w)##dis needs tuning
    cv2.circle(corr_img, (int(x+w/2+corr_img.shape[1]/2), int(y+h/2)), 3, (0, 255, 255), -1)
    cv2.circle(corr_img, (int(x+w/2+dis+corr_img.shape[1]/2), int(y+h/2)), 3, (0, 0, 255), -1)
    cv2.circle(corr_img, (corr_img.shape[1]/2, int(y+h/2)), 3, (255, 0, 0), -1)
    cv2.imshow('result',corr_img)  
    #cv2.imwrite('003.jpg',corr_img)
    print('w= ',w,'h= ',h)
    if laser_cmd == 0:        
        msg.drive.speed = -20				
        if (w <180) & (flag3 == 0):#w<180需要调参
       
            offset = - (dis + x+w/2)
            print('offset',offset)
            Reverse_PID.update(offset)
            msg.drive.steering_angle = Reverse_PID.output
        else:
            msg.drive.steering_angle =   0
            flag3 = 1
            '''
            gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
            edged = cv2.Canny(gray, 400, 800)
            transform_matrix = perspective_transform(src_pts,dst_pts)
            warped_image = birdView(edged*1.0,transform_matrix['M']) 
            element1 = cv2.getStructuringElement(0,(2,80)) 
            element2 = cv2.getStructuringElement(0,(2,2)) 
            eroded = cv2.erode(warped_image,element1)  
            dilated = cv2.dilate(warped_image,element2)    
            HoughLine_image = np.array(dilated,np.uint8)
            lines = cv2.HoughLinesP(HoughLine_image,1,np.pi/180,100,100,100,50)
            if lines is not None :
                for x1,y1,x2,y2 in lines[0]:
                    cv2.line(HoughLine_image,(x1,y1),(x2,y2),(255,0,0),1)
            for i in range(HoughLine_image.shape[1]/5):
                x = 5*i
                px = [x1,x2]
                py = [y1,y2]
                c1,c0 = np.polyfit(px, py, 1)
                y = int(c1*x + c0)
                cv2.circle(HoughLine_image, (x, y), 3, (255, 0, 0), -1)
            cv2.imshow("HoughLine_image",HoughLine_image )
            cv2.waitKey(25)
        
            k1 = 55
            k2 = 0.02
            if (480-c0)/c1<320:
                msg.drive.steering_angle = - 1/c1*k1 + k2*(480- c0)/c1                                  
            if (480-c0)/c1>320:
                msg.drive.steering_angle = - 1/c1*k1 - k2*(640-(480-c0)/c1) 
           
            '''
   
    else :
        msg.drive.speed = 0
        msg.drive.steering_angle = 0
        flag3 = 0
    print('The speed is ',msg.drive.speed)
    print('The steering angle is ',msg.drive.steering_angle)
    pub.publish(msg)
        
def detector():
 
    global pub
    global pubresult
    global Vehicle_PID
    global Reverse_PID
    Vehicle_PID = PID(0.05,0,0)#PID gains need tuning (2,0,0)
    Reverse_PID = PID(0.25,0,0)#PID gains need tuning
    rospy.init_node('camera_cmd', anonymous=False)
    #rospy.Subscriber("/usb_cam_1/image", Image, front_camera_callback, queue_size=1, buff_size=2**24)
    rospy.Subscriber("/usb_cam_1/image", Image, rear_camera_callback, queue_size=1, buff_size=2**24)
    rospy.Subscriber("/laser_control", laser_control, laser_callback, queue_size=1)
    pub = rospy.Publisher('/ackermann_cmd', AckermannDriveStamped, queue_size=1)
    rospy.spin()
 
if __name__ == '__main__':
    
    parser = argparse.ArgumentParser(description="Classify a live camera stream using an image recognition DNN.", 
                                 formatter_class=argparse.RawTextHelpFormatter, epilog=jetson.inference.imageNet.Usage() +
                                 jetson.utils.videoSource.Usage() + jetson.utils.videoOutput.Usage() + jetson.utils.logUsage())
 
    parser.add_argument("input_URI", type=str, default="", nargs='?', help="URI of the input stream")
    parser.add_argument("output_URI", type=str, default="", nargs='?', help="URI of the output stream")
    parser.add_argument("--network", type=str, default="googlenet", help="pre-trained model to load (see below for options)")
    parser.add_argument("--camera", type=str, default="0", help="index of the MIPI CSI camera to use (e.g. CSI camera 0)\nor for VL42 cameras, the /dev/video device to use.\nby default, MIPI CSI camera 0 will be used.")
    parser.add_argument("--width", type=int, default=1280, help="desired width of camera stream (default is 1280 pixels)")
    parser.add_argument("--height", type=int, default=720, help="desired height of camera stream (default is 720 pixels)")
    parser.add_argument('--headless', action='store_true', default=(), help="run without display")
    
    is_headless = ["--headless"] if sys.argv[0].find('console.py') != -1 else [""]
 
    try:
        opt = parser.parse_known_args()[0]
    except:
        print("")
        parser.print_help()
        sys.exit(0)
 
   
# load the recognition network
    net = jetson.inference.imageNet(opt.network, sys.argv)
   
    detector()
