Line-Following Principles and Debugging Notes for an STM32 Car with an OpenMV Camera (OpenMV Code Included)

This article records the problems I ran into while building a line-following car and how I solved them, for reference by anyone doing the same...
-------------->Let's dive in------------>
How the various STM32 cars work, with learning material:
http://www.yahboom.com/study/bc-32

Overview: a line-following car consists of two main parts, a three- or four-wheeled car and a camera. The camera identifies the path, processes the resulting image, and sends the result to the car, which responds with the matching action.
Core components: OpenMV and STM32F1.
Main topics involved: STM32 basics (video course: https://ke.qq.com/course/279403);
OpenMV basics (tutorial site: http://book.myopenmv.com/quick-start/ide-tutorial.html);
USART communication (used between the camera and the car);
SPI communication (used for the OLED display);
DC motor fundamentals;
the TB6612FNG DC motor driver (http://tech.hqew.com/circuit_1476430);
and so on.

Once you have that background, you can start building the car.
It will not all be smooth sailing; things will go wrong everywhere, so be careful and patient and solve the problems one at a time. Points to watch during assembly:
1. When connecting the car's modules with DuPont wires, first make sure none of the wires are broken;
2. The wires must not be loose or half-seated either, or you get intermittent contact that is easily mistaken for a software bug during debugging, wasting time;
3. Supply each component at its rated voltage.

By now the car is basically assembled (except for the camera). Start it up first and make sure it can drive, and that it can hold a straight line; a really straight one!
Keep checking now and then that no wires have come loose!!
Once everything is OK, mount the OpenMV and wire up the data link.

A few points to note on the software side:
1. With USART communication, be clear about the protocol you are using, or you will receive data you did not expect;
2. On my OpenMV the left/right deflection ranged over 0 to 45 and -45 to 0, but when that value was sent back to the STM32 the negative half arrived corrupted. The fix is to add a suitable constant to the deflection so that the transmitted range contains no negative numbers; see the sketch below.
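A minimal sketch of that fix, assuming the angle stays within -45 to 45 and reusing the 0x41/0x42 frame tail from the full program below (the ANGLE_OFFSET name and the clamping are my additions, not from the original project):

ANGLE_OFFSET = 45  # chosen so that -45..45 maps to 0..90

def send_angle(uart, angle):
    # Clamp to the expected range, then shift so bytearray() never sees a negative value.
    a = max(-45, min(45, int(angle)))
    uart.write(bytearray([a + ANGLE_OFFSET]))  # payload: one unsigned byte
    uart.writechar(0x41)  # frame tail, byte 1
    uart.writechar(0x42)  # frame tail, byte 2

The STM32 receiver then subtracts the same offset after reception to recover the signed angle.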

With that sorted I could not wait to run it, but the moment I hit the switch, the car followed the track halfway through a bend and then ran off wildly, then ran off again... At this stage there are a few more things to note; please read them through patiently:
1. Pay attention, these points are important!!!
2. If the car can follow the line roughly but not all the way, take the camera off the car, fix it on a simple stand with the same camera tilt and height, connect it to the computer, and watch the car's-eye view in the OpenMV IDE; check whether the line leaves the field of view halfway through a bend. If that is the problem, adjust the program (and the mounting) to suit your needs;
3. The car turns left smoothly but "derails" on right turns, or runs off halfway through a left turn. First make sure the camera is mounted straight and that no loose wiring is disturbing the data link; once the hardware is ruled out, check the deflection data. Watching the values OpenMV produced during turns, I found the left-turn deflection was normal while the right-turn deflection was too small, which is why the car derailed halfway through right bends. Slightly increasing the right-turn deflection solved it; see the sketch after this list;
4. Sometimes the transmitted data gets scrambled even though the hardware checks out. In that case, power the data-transmission module separately or route its wires away from the main bundle; the other wiring may be interfering slightly (it may not be the cause, but it is worth a try);
5. Always, without exception, test somewhere with good lighting; this matters!!!! Weak light degrades path recognition, so the car wanders or jumps onto a neighbouring track, and you will waste time re-debugging a program that was never at fault. N rounds of program tweaking are worth less than one round of good lighting!!
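One hedged way to implement the fix from point 3 in software is to boost only the right-turn deflection before sending it. RIGHT_GAIN is my illustrative name, and the sign convention (negative meaning a right turn) is an assumption; match both to your own setup:

RIGHT_GAIN = 1.2  # tune on the track; 1.0 means no compensation

def compensate(deflection_angle):
    # Assumption: negative angles mean "turn right" in this setup.
    if deflection_angle < 0:
        deflection_angle *= RIGHT_GAIN
    # Keep the result inside the range the protocol expects.
    return max(-45, min(45, deflection_angle))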

Good; at this point your car should run smoothly.
Finally, the OpenMV Python program:

# Black Grayscale Line Following Example
#
# Making a line following robot requires a lot of effort. This example script
# shows how to do the machine vision part of the line following robot. You
# can use the output from this script to drive a differential drive robot to
# follow a line. This script just generates a single turn value that tells
# your robot to go left or right.
#
# For this script to work properly you should point the camera at a line at a
# 45 or so degree angle. Please make sure that only the line is within the
# camera's field of view.

import sensor, image, time, math # module imports
from pyb import UART

# Tracks a black line. Use [(128, 255)] for tracking a white line.
GRAYSCALE_THRESHOLD = [(0, 64)]
# Threshold setup: for a black line, GRAYSCALE_THRESHOLD = [(0, 64)];
# for a white line, GRAYSCALE_THRESHOLD = [(128, 255)].


# Each roi is (x, y, w, h). The line detection algorithm will try to find the
# centroid of the largest blob in each roi. The x position of the centroids
# will then be averaged with different weights where the most weight is assigned
# to the roi near the bottom of the image and less to the next roi and so on.
ROIS = [ # [ROI, weight]
        (0, 100, 160, 20, 0.7), # You'll need to tweak the weights for your app
        (0,  50, 160, 20, 0.3), # depending on how your robot is setup.
        (0,   0, 160, 20, 0.1)
       ]
# Each ROI is (x, y, w, h, weight): a rectangle with top-left corner (x, y),
# width w, and height h, plus that rectangle's weight. Note this example uses
# QQVGA (160x120), so the ROIs split the image into three horizontal bands.
# Tune the three rectangles to your own situation; the band nearest the robot's
# view should carry the largest weight, i.e. the bottom one, (0, 100, 160, 20, 0.7).

# Compute the weight divisor (we're computing this so you don't have to make weights add to 1).
weight_sum = 0 # initialize the weight sum
for r in ROIS: weight_sum += r[4] # r[4] is the roi weight.
# Sum the weights over the three rectangles; r[4] is each rectangle's weight.

# Camera setup...
sensor.reset() # Initialize the camera sensor.
sensor.set_pixformat(sensor.GRAYSCALE) # use grayscale.
sensor.set_framesize(sensor.QQVGA) # use QQVGA for speed.
sensor.skip_frames(30) # Let new settings take effect.
sensor.set_auto_gain(False) # must be turned off for color tracking
sensor.set_auto_whitebal(False) # must be turned off for color tracking
# auto white balance off
clock = time.clock() # Tracks FPS.

uart = UART(3, 19200)                         # UART 3 at 19200 baud
uart.init(19200, bits=8, parity=None, stop=1) # init with given parameters
# Create and initialize the UART once, before the loop; re-initializing it on
# every frame is wasteful and can corrupt a byte already in flight.

while(True):
    clock.tick() # Track elapsed milliseconds between snapshots().
    img = sensor.snapshot() # Take a picture and return the image.

    centroid_sum = 0
    # Use color tracking to look for the line segment inside each of the three ROIs.
    for r in ROIS:
        blobs = img.find_blobs(GRAYSCALE_THRESHOLD, roi=r[0:4], merge=True)
        # r[0:4] is roi tuple.
        # Find the line in view; merge=True merges the detected regions into one blob.

        # A line was found in this target region.
        if blobs:
            # Find the index of the blob with the most pixels.
            most_pixels = 0
            largest_blob = 0
            for i in range(len(blobs)):
            # More than one blob (line segment) may be found in this region;
            # take the largest one as the region's target line.
                if blobs[i].pixels() > most_pixels:
                    most_pixels = blobs[i].pixels()
                    # blobs[i].pixels() is the blob's pixel count; when it
                    # exceeds most_pixels, record this blob as the largest so
                    # far by updating most_pixels and largest_blob.
                    largest_blob = i

            # Draw a rect around the blob.
            img.draw_rectangle(blobs[largest_blob].rect())
            img.draw_rectangle((0,0,30, 30))
            # Mark this region's largest blob with a rectangle and a cross.
            img.draw_cross(blobs[largest_blob].cx(),
                           blobs[largest_blob].cy())

            centroid_sum += blobs[largest_blob].cx() * r[4] # r[4] is the roi weight.
            # centroid_sum accumulates each region's largest-blob centroid x coordinate times that region's weight.

    center_pos = (centroid_sum / weight_sum) # Determine center of line.
    # weighted-average x position of the line

    # Convert the center_pos to a deflection angle. We're using a non-linear
    # operation so that the response gets stronger the farther off the line we
    # are. Non-linear operations are good to use on the output of algorithms
    # like this to cause a response "trigger".
    deflection_angle = 0
    # the angle the robot should turn

    # The 80 is from half the X res, the 60 is from half the Y res. The
    # equation below is just computing the angle of a triangle where the
    # opposite side of the triangle is the deviation of the center position
    # from the center and the adjacent side is half the Y res. This limits
    # the angle output to around -45 to 45. (It's not quite -45 and 45).
    deflection_angle = -math.atan((center_pos-80)/60)
    # Angle computation: 80 and 60 are half the image width and height (QQVGA, 160x120).
    # Note the result here is in radians.

    # Convert angle in radians to degrees.
    deflection_angle = math.degrees(deflection_angle)
    # Convert the computed radian value to degrees.
    A = deflection_angle
    print("Turn Angle: %d" % int(A)) # cast to int for printing
    # Now you have an angle telling you how much to turn the robot by which
    # incorporates the part of the line nearest to the robot and parts of
    # the line farther away from the robot for a better prediction.
    print("Turn Angle: %f" % deflection_angle)
    # Print the result in the terminal.
    uart_buf = bytearray([int(A) + 45]) # shift -45..45 into 0..90: bytearray()
                                        # rejects negative values (see the note above)
    uart.write(uart_buf) # unlike uart.writechar, which sends a single character,
                         # this sends the raw int payload
    uart.writechar(0x41) # protocol frame tail
    uart.writechar(0x42)

    time.sleep(10) # delay
    print(clock.fps()) # Note: Your OpenMV Cam runs about half as fast while
    # connected to your computer. The FPS should increase once disconnected.
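
The receiving side on the STM32 is written in C, but the parsing logic is language-independent. Here is a hedged Python sketch of one way a receiver could decode these frames and undo the +45 offset (the parse_frames helper and its resync behaviour are my illustration, not part of the original project; note that a dropped byte can still misalign such a simple tail-only protocol):

def parse_frames(stream):
    # Yield signed angles from a stream of (angle+45, 0x41, 0x42) frames.
    buf = bytearray()
    for byte in stream:
        buf.append(byte)
        # A complete frame is three bytes ending in the 0x41 0x42 tail.
        if len(buf) >= 3 and buf[-2] == 0x41 and buf[-1] == 0x42:
            yield buf[-3] - 45 # undo the transmit offset
            buf = bytearray()  # start collecting the next frame

# Example: frames carrying angles -10 and +30
for angle in parse_frames(bytes([35, 0x41, 0x42, 75, 0x41, 0x42])):
    print(angle) # prints -10, then 30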
