Finding targets in video streams - OpenCV - PyImageSearch

Target acquired: Finding targets in drone and quadcopter video streams using Python and OpenCV
https://pyimagesearch.com/2015/05/04/target-acquired-finding-targets-in-drone-and-quadcopter-video-streams-using-python-and-opencv/

Create the “targets” that I want to detect.
Goal: detect these targets in the video recorded by my Hubsan X4.

Why did I choose squares as my targets?

  • Using contour properties to detect objects in images.
  • A clever use of contour properties can save you from training complicated machine learning models.

the properties of a square

  • A square has four vertices.
  • A square will have approximately equal width and height, so its aspect ratio (the ratio of width to height) will be approximately 1.
  • Two more contour properties will help: the convex hull and solidity.

The convex hull: a concept from computational geometry. In a real vector space V, the convex hull of a given set X is the intersection S of all convex sets that contain X. In two-dimensional Euclidean space, the convex hull can be pictured as a rubber band stretched tightly around all of the points.

Less formally: given a set of points in the 2D plane, the convex hull is the convex polygon formed by connecting the outermost points, and it encloses every point in the set.
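Solidity is then the ratio of a contour's area to the area of its convex hull; a filled, convex shape such as a square has solidity close to 1. As a minimal sketch of these two properties (not part of the original post; the image path shapes.png is a hypothetical placeholder), the snippet below computes the aspect ratio and solidity for every contour found in a single still image:

# minimal sketch: aspect ratio and solidity of contours in one image
import cv2
import imutils

image = cv2.imread("shapes.png")	# hypothetical input image
gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
edged = cv2.Canny(cv2.GaussianBlur(gray, (7, 7), 0), 50, 150)

cnts = cv2.findContours(edged.copy(), cv2.RETR_EXTERNAL,
	cv2.CHAIN_APPROX_SIMPLE)
cnts = imutils.grab_contours(cnts)

for c in cnts:
	# aspect ratio: width / height of the bounding box
	(x, y, w, h) = cv2.boundingRect(c)
	aspectRatio = w / float(h)

	# solidity: contour area / convex hull area
	hullArea = cv2.contourArea(cv2.convexHull(c))
	if hullArea == 0:
		continue
	solidity = cv2.contourArea(c) / float(hullArea)

	# a solid, roughly square contour has both values close to 1
	print("aspect ratio: {:.2f}, solidity: {:.2f}".format(aspectRatio, solidity))

A square target should report an aspect ratio near 1 and a solidity above roughly 0.9, which is exactly what the full script below tests for.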

# How to find and detect targets in quadcopter video streams using Python and OpenCV.

# create the "target"
# goal:detect these targets in the video recorded by my Hubsan X4

# import the necessary packages
import argparse
import imutils
import cv2
# construct the argument parse and parse the arguments
ap = argparse.ArgumentParser()
ap.add_argument("-v", "--video", help="path to the video file")
args = vars(ap.parse_args())
# load the video
camera = cv2.VideoCapture(args["video"])	# open the video file for reading
# keep looping, loop over each frame of the input video
while True:
	# grab the current frame and initialize the status text
	(grabbed, frame) = camera.read()	# grab the next frame from the video buffer
	# returns a tuple: grabbed is a boolean indicating whether the frame was read successfully;
	# frame is the frame itself, a NumPy array of N x M pixels
	status = "No Targets"	# indicates whether a target has been found in the current frame
	# check to see if we have reached the end of the
	# video
	if not grabbed:
		break
	# pre-processing, applied to each frame
	# convert the frame to grayscale, blur it, and detect edges
	gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)	# convert to grayscale
	blurred = cv2.GaussianBlur(gray, (7, 7), 0)	# remove high-frequency noise
	edged = cv2.Canny(blurred, 50, 150)	# edge detection
	# find contours in the edge map
	# To distinguish between these outlines, we'll need to leverage contour properties.
	cnts = cv2.findContours(edged.copy(), cv2.RETR_EXTERNAL,
		cv2.CHAIN_APPROX_SIMPLE)	# returns the list of contour regions found in the edge map
	cnts = imutils.grab_contours(cnts)	# handle OpenCV version compatibility
	
# ---- target detection
	# loop over each of the contours and apply contour approximation, an algorithm
	# that reduces the number of points in a curve, yielding an approximation of it.
	# It is known as the Ramer-Douglas-Peucker algorithm, or simply the
	# split-and-merge algorithm.
	# The algorithm assumes that a curve can be approximated by a series of short
	# line segments, so we can approximate it with a given number of segments to
	# reduce the number of points needed to construct the curve.
	for c in cnts:
		# approximate the contour
		peri = cv2.arcLength(c, True)
		approx = cv2.approxPolyDP(c, 0.01 * peri, True)

		# if the approximated contour has 4-6 points, treat it as a rectangle and a
		# candidate for further processing; ideally a square has exactly 4 vertices,
		# but real-world noise (poor image quality or motion blur) can add a few more
		# ensure that the approximated contour is "roughly" rectangular
		if len(approx) >= 4 and len(approx) <= 6:
			# compute the bounding box of the approximated contour and
			# use the bounding box to compute the aspect ratio
			(x, y, w, h) = cv2.boundingRect(approx)	# get the bounding box of the contour
			aspectRatio = w / float(h)	# compute the aspect ratio of the box

			# compute two more contour properties
			# compute the solidity of the original contour
			area = cv2.contourArea(c)	# area of the original contour region
			hullArea = cv2.contourArea(cv2.convexHull(c))	# area of the convex hull
			solidity = area / float(hullArea)	# solidity = contour area / convex hull area
			
			# use these properties to decide whether a target has been found in the
			# frame, subject to a few constraints
			# compute whether or not the width and height, solidity, and
			# aspect ratio of the contour falls within appropriate bounds
			keepDims = w > 25 and h > 25	# minimum size; filters out small random artifacts in the frame
			keepSolidity = solidity > 0.9	# solidity threshold
			keepAspectRatio = aspectRatio >= 0.8 and aspectRatio <= 1.2	# aspect ratio close to 1, i.e. roughly square

			# ensure that the contour passes all our tests
			if keepDims and keepSolidity and keepAspectRatio:
				
				# draw an outline around the target and update the status text
				cv2.drawContours(frame, [approx], -1, (0, 0, 255), 4)	# draw the approximated contour outline on the frame
				status = "Target(s) Acquired"	# update the status
				
				# compute the center of the contour region and use its coordinates
				# to draw crosshairs on the target
				M = cv2.moments(approx)	
				(cX, cY) = (int(M["m10"] // M["m00"]), int(M["m01"] // M["m00"]))
				(startX, endX) = (int(cX - (w * 0.15)), int(cX + (w * 0.15)))
				(startY, endY) = (int(cY - (h * 0.15)), int(cY + (h * 0.15)))
				cv2.line(frame, (startX, cY), (endX, cY), (0, 0, 255), 3)
				cv2.line(frame, (cX, startY), (cX, endY), (0, 0, 255), 3)
	# draw the status text on the frame
	cv2.putText(frame, status, (20, 30), cv2.FONT_HERSHEY_SIMPLEX, 0.5,
		(0, 0, 255), 2)	# draw the status text in the top-left corner of the frame
	
	# show the frame and record if a key is pressed
	cv2.imshow("Frame", frame)
	key = cv2.waitKey(1) & 0xFF	# wait 1 ms for a keypress and keep only the low byte of the key code
	
	# if the 'q' key is pressed, stop the loop
	if key == ord("q"):
		break

# cleanup the camera and close any open windows
camera.release()	# release the pointer to the video file
cv2.destroyAllWindows()	# close any open windows

#https://pyimagesearch.com/2015/05/04/target-acquired-finding-targets-in-drone-and-quadcopter-video-streams-using-python-and-opencv/
# Run: python drone.py --video Demo.mp4
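# As a hedged aside (not part of the original post): the same loop can read from a
# live camera instead of a file by passing a device index to cv2.VideoCapture, e.g.
# camera = cv2.VideoCapture(0)   # assumption: index 0 is the default attached camera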
