OpenMV Color Blob Detection with Serial Communication to an STM32F4

I spent three days learning the basics of OpenMV, so I'm writing this post to document what I did and to share my code for discussion. This is my first post, so please forgive any rough edges.

First, a summary of what I implemented: the OpenMV detects a red color blob and sends the blob's center coordinates to a microcontroller over a serial port, and the microcontroller receives them.

Hardware: the microcontroller is an STM32F407VET6, and the camera is an OpenMV M7, whose main controller is from the STM32F7 series. Any OpenMV model can follow along, since the only peripheral used is a UART, which every OpenMV has.

Resource usage: the STM32 uses USART1 at 115200 baud; the OpenMV uses UART3, also at 115200 baud.

My code is below.

# Single Color RGB565 Blob Tracking Example
#
# This example shows off single color RGB565 tracking using the OpenMV Cam.


import sensor, image, time, math
from pyb import UART


threshold_index = 0      # 0 for red, 1 for green, 2 for blue


# Color Tracking Thresholds (L Min, L Max, A Min, A Max, B Min, B Max)
# The below thresholds track in general red/green/blue things. You may wish to tune them...
thresholds = [(33, 60, 30, 65, 0, 30 ), # generic_red_thresholds
              (30, 100, -64, -8, -32, 32), # generic_green_thresholds
              (0, 30, 0, 64, -128, 0), # generic_blue_thresholds
              (79, 95, -13, 0, 25, 36)] # yellow_thresholds
sensor.reset()
sensor.set_pixformat(sensor.RGB565)
sensor.set_framesize(sensor.QVGA)
sensor.skip_frames(time = 2000)
sensor.set_auto_gain(False) # must be turned off for color tracking
sensor.set_auto_whitebal(False) # must be turned off for color tracking
clock = time.clock()
uart = UART(3, 115200)   # OpenMV UART3: TX on P4, RX on P5


# Only blobs that with more pixels than "pixel_threshold" and more area than "area_threshold" are
# returned by "find_blobs" below. Change "pixels_threshold" and "area_threshold" if you change the
# camera resolution. "merge=True" merges all overlapping blobs in the image.


while(True):
    clock.tick()     # track FPS
    img = sensor.snapshot()
    for blob in img.find_blobs([thresholds[threshold_index],thresholds[3]], pixels_threshold=200, area_threshold=200, merge=True):
        # These values depend on the blob not being circular - otherwise they will be shaky.
        if blob.elongation() > 0.5:     # elongation ranges from 0 (circle) to 1 (line)
            img.draw_edges(blob.min_corners(), color=(255,0,0))
            img.draw_line(blob.major_axis_line(), color=(0,255,0))
            img.draw_line(blob.minor_axis_line(), color=(0,0,255))
        # These values are stable all the time.
        img.draw_rectangle(blob.rect())
        img.draw_cross(blob.cx(), blob.cy())
        # Note - the blob rotation is unique to 0-180 only.
        img.draw_keypoints([(blob.cx(), blob.cy(), int(math.degrees(blob.rotation())))], size=20)
        if blob.code() == 1:    # code 1 = matched the first threshold in the list (red)
            print(clock.fps())
            print(blob.cx(), blob.cy())
            # Frame: 0x24 0x23 ('$' '#') header, then cx and cy each split into
            # high/low bytes so that values of 256 or more fit into single bytes.
            # (Note: the low byte must be value % 256, not value - 256; subtracting
            # 256 from a value below 256 would give a negative number and make
            # bytearray() raise an error.)
            img_data = bytearray([0x24, 0x23,
                                  blob.cx() // 256, blob.cx() % 256,
                                  blob.cy() // 256, blob.cy() % 256])
            uart.write(img_data)
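To make the wire format above easy to check on a desktop, here is a small sketch of the same 6-byte frame: a `0x24 0x23` ('$' '#') header followed by cx and cy as high/low byte pairs. The `pack_frame` and `unpack_frame` names are illustrative helpers for this post, not part of the OpenMV firmware:

```python
def pack_frame(cx, cy):
    """Pack blob center coordinates into the 6-byte frame sent over UART."""
    return bytearray([0x24, 0x23,           # header: '$' '#'
                      cx // 256, cx % 256,  # cx high byte, low byte
                      cy // 256, cy % 256]) # cy high byte, low byte

def unpack_frame(frame):
    """Recover (cx, cy) from a frame, as the STM32 side does."""
    assert frame[0] == 0x24 and frame[1] == 0x23, "bad header"
    cx = frame[2] * 256 + frame[3]
    cy = frame[4] * 256 + frame[5]
    return cx, cy

# QVGA is 320x240, so cx can exceed 255 and genuinely needs two bytes.
print(unpack_frame(pack_frame(300, 120)))  # -> (300, 120)
```

This also shows why the low byte must be `% 256`: splitting and recombining must round-trip for every coordinate in the frame, not just those above 255.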
            

STM32 receive code:
These are the definitions used for reception (the frame length must be defined; it is 2 header bytes plus 4 data bytes):

#define recBuff_length 6            // 2 header bytes + 4 data bytes
unsigned int rec_data_buff[50] = {0}; // received data

I won't paste the serial-port configuration function; for some reason the blog breaks whenever I paste a whole .c file. Below is just the data-reception part of the interrupt handler. Any working USART configuration will do; there is nothing special about it.

void USART1_IRQHandler(void)
{
    static u8 i = 0;
    if (USART_GetITStatus(USART1, USART_IT_RXNE))       // USART1 RXNE (receive) interrupt?
    {
        rec_data_buff[i] = USART_ReceiveData(USART1);   // read the received byte
        i++;
        if (i == 2)
        {
            if ((rec_data_buff[0] != 0x24) || (rec_data_buff[1] != 0x23))  // header: '$' then '#'
            {
                i = 0;                                  // bad header, restart
            }
        }
        if (i == recBuff_length)                        // a full frame has been received
        {
            // Recombine high/low bytes only once the frame is complete
            rec_data_buff[recBuff_length]     = rec_data_buff[2] * 256 + rec_data_buff[3]; // blob.cx()
            rec_data_buff[recBuff_length + 1] = rec_data_buff[4] * 256 + rec_data_buff[5]; // blob.cy()
            i = 0;                                      // start over for the next frame
        }
        USART_ClearFlag(USART1, USART_FLAG_RXNE);
    }
}
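The byte-by-byte logic in the interrupt handler can be modelled in plain Python to check it on a desktop before flashing. `FrameParser` below is a hypothetical illustration of the same state machine (the names are mine, not STM32 code): accumulate bytes, reset on a bad header, and decode once six bytes have arrived.

```python
class FrameParser:
    """Desktop mirror of the USART1 ISR's receive logic."""
    FRAME_LEN = 6  # 2 header bytes + 4 data bytes

    def __init__(self):
        self.buf = []
        self.result = None  # last decoded (cx, cy), or None

    def feed(self, byte):
        self.buf.append(byte)
        if len(self.buf) == 2 and (self.buf[0] != 0x24 or self.buf[1] != 0x23):
            self.buf = []          # bad header: restart, as the ISR resets i
        elif len(self.buf) == self.FRAME_LEN:
            cx = self.buf[2] * 256 + self.buf[3]
            cy = self.buf[4] * 256 + self.buf[5]
            self.result = (cx, cy)
            self.buf = []          # ready for the next frame

p = FrameParser()
for b in bytearray([0x24, 0x23, 1, 44, 0, 120]):
    p.feed(b)
print(p.result)  # -> (300, 120)
```

One design note: like the original ISR, this only validates the header once both header bytes are in, so a stray byte arriving just before a real '$' can cost one frame. For this application a lost frame is harmless, since the camera sends coordinates continuously.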

That's about it. If you have any questions, feel free to message me and we can help each other out.
(The blog editor kept acting up for no obvious reason; a first post really is a bit tricky.)
