Computer Vision in Practice (13): Parking Space Detection (Full Code Included)

The goal is to work out three things:

How many cars are in the lot in total.

How many parking spaces are free.

Which parking spaces are occupied and which are not.

Read the input image:

[Figure: input parking lot image]
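The image-loading and display helpers are not shown in the post; as a minimal sketch, assuming the frame sits at a hypothetical path test_images/scene1.jpg and that cv_show is a thin imshow wrapper on the detector class (the class name ParkingSpotDetector is made up here), this could look like:

import cv2
import numpy as np

class ParkingSpotDetector:          # hypothetical class name holding the methods below
    def cv_show(self, name, img):
        # Display helper: show an image and block until a key is pressed
        cv2.imshow(name, img)
        cv2.waitKey(0)
        cv2.destroyAllWindows()

image = cv2.imread('test_images/scene1.jpg')   # hypothetical path to a test frame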

Once we have the image, we preprocess it: pixels below 120 (or above 255) are zeroed out, keeping only the bright, near-white regions such as the lane markings.

def select_rgb_white_yellow(self, image):
    # Filter out the background
    lower = np.uint8([120, 120, 120])
    upper = np.uint8([255, 255, 255])
    # Pixels below lower or above upper become 0; pixels inside [lower, upper]
    # become 255, which effectively filters out the background
    white_mask = cv2.inRange(image, lower, upper)
    self.cv_show('white_mask', white_mask)

    masked = cv2.bitwise_and(image, image, mask=white_mask)
    self.cv_show('masked', masked)
    return masked

[Figure: white_mask produced by cv2.inRange]

The mask is then ANDed with the original image, so only the pixels where the mask is 255 survive.

[Figure: masked image after bitwise_and]

Next, convert the result to grayscale and run edge detection:

[Figure: edge detection result]
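The post does not reproduce these two helpers; a minimal sketch, with Canny thresholds of 50 and 200 chosen as reasonable defaults rather than the author's exact values, might be:

def convert_gray_scale(self, image):
    # Collapse the masked colour image to a single grayscale channel
    return cv2.cvtColor(image, cv2.COLOR_RGB2GRAY)

def detect_edges(self, image, low_threshold=50, high_threshold=200):
    # Canny edge detection; the thresholds are assumptions and may need tuning
    return cv2.Canny(image, low_threshold, high_threshold)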

Manually select the valid region:

def select_region(self, image):
    """
    Manually select the region of interest
    """
    # First, define the polygon by its vertices
    rows, cols = image.shape[:2]
    pt_1 = [cols*0.05, rows*0.90]
    pt_2 = [cols*0.05, rows*0.70]
    pt_3 = [cols*0.30, rows*0.55]
    pt_4 = [cols*0.60, rows*0.15]
    pt_5 = [cols*0.90, rows*0.15]
    pt_6 = [cols*0.90, rows*0.90]
    vertices = np.array([[pt_1, pt_2, pt_3, pt_4, pt_5, pt_6]], dtype=np.int32)

    # Draw the vertices on a copy of the image for visual inspection
    point_img = image.copy()
    point_img = cv2.cvtColor(point_img, cv2.COLOR_GRAY2RGB)
    for point in vertices[0]:
        cv2.circle(point_img, (point[0], point[1]), 10, (0, 0, 255), 4)
    self.cv_show('point_img', point_img)

    return self.filter_region(image, vertices)

[Figure: the six region vertices drawn on the edge image]

Next, fill a mask from the polygon and use it to cut the region out:

def filter_region(self, image, vertices):
    """
    Remove everything outside the selected region
    """
    mask = np.zeros_like(image)
    if len(mask.shape) == 2:
        cv2.fillPoly(mask, vertices, 255)
        self.cv_show('mask', mask)
    return cv2.bitwise_and(image, mask)

[Figure: filled polygon mask]

[Figure: edges restricted to the selected region]

Then detect line segments with the Hough transform and filter them down:

def hough_lines(self, image):
    # The input must be the result of edge detection
    # rho: distance resolution, theta: angle resolution,
    # threshold: minimum number of votes before a segment is returned
    # minLineLength: segments shorter than this are ignored
    # maxLineGap: the largest gap between two segments that are still merged into one line
    return cv2.HoughLinesP(image, rho=0.1, theta=np.pi/10, threshold=15,
                           minLineLength=9, maxLineGap=4)

def draw_lines(self, image, lines, color=[255, 0, 0], thickness=2, make_copy=True):
    # Filter the line segments returned by the Hough transform and draw the survivors
    if make_copy:
        image = np.copy(image)
    cleaned = []
    for line in lines:
        for x1, y1, x2, y2 in line:
            # Keep only near-horizontal segments with a plausible slot width
            if abs(y2 - y1) <= 1 and abs(x2 - x1) >= 25 and abs(x2 - x1) <= 55:
                cleaned.append((x1, y1, x2, y2))
                cv2.line(image, (x1, y1), (x2, y2), color, thickness)
    print("Number of lines detected:", len(cleaned))
    return image

[Figure: filtered line segments drawn on the original image]
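Putting the steps so far together, a usage sketch (reusing the hypothetical ParkingSpotDetector class and the grayscale/edge helpers sketched above) could be:

detector = ParkingSpotDetector()                    # hypothetical class name
image    = cv2.imread('test_images/scene1.jpg')     # hypothetical path
masked   = detector.select_rgb_white_yellow(image)
gray     = detector.convert_gray_scale(masked)
edges    = detector.detect_edges(gray)
roi      = detector.select_region(edges)
lines    = detector.hough_lines(roi)
line_img = detector.draw_lines(image, lines)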

def identify_blocks(self, image, lines, make_copy=True):
    if make_copy:
        new_image = np.copy(image)

    # Step 1: filter out unsuitable line segments
    cleaned = []
    for line in lines:
        for x1, y1, x2, y2 in line:
            if abs(y2 - y1) <= 1 and abs(x2 - x1) >= 25 and abs(x2 - x1) <= 55:
                cleaned.append((x1, y1, x2, y2))

    # Step 2: sort the segments by x1 (then y1)
    import operator
    list1 = sorted(cleaned, key=operator.itemgetter(0, 1))

    # Step 3: group the segments into columns, one column per row of parking spots
    clusters = {}
    dIndex = 0
    clus_dist = 10
    for i in range(len(list1) - 1):
        distance = abs(list1[i + 1][0] - list1[i][0])
        if distance <= clus_dist:
            if not dIndex in clusters.keys():
                clusters[dIndex] = []
            clusters[dIndex].append(list1[i])
            clusters[dIndex].append(list1[i + 1])
        else:
            dIndex += 1

    # Step 4: compute one bounding rectangle per column
    rects = {}
    i = 0
    for key in clusters:
        all_list = clusters[key]
        cleaned = list(set(all_list))
        if len(cleaned) > 5:
            cleaned = sorted(cleaned, key=lambda tup: tup[1])
            avg_y1 = cleaned[0][1]
            avg_y2 = cleaned[-1][1]
            avg_x1 = 0
            avg_x2 = 0
            for tup in cleaned:
                avg_x1 += tup[0]
                avg_x2 += tup[2]
            avg_x1 = avg_x1 / len(cleaned)
            avg_x2 = avg_x2 / len(cleaned)
            rects[i] = (avg_x1, avg_y1, avg_x2, avg_y2)
            i += 1
    print("Num Parking Lanes: ", len(rects))

    # Step 5: draw the column rectangles
    buff = 7
    for key in rects:
        tup_topLeft = (int(rects[key][0] - buff), int(rects[key][1]))
        tup_botRight = (int(rects[key][2] + buff), int(rects[key][3]))
        cv2.rectangle(new_image, tup_topLeft, tup_botRight, (0, 255, 0), 3)
    return new_image, rects

Divide the lot into column regions:

[Figure: one green rectangle drawn per parking column]

Then subdivide each column into individual spots:

[Figure: individual parking spots marked inside each column]
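The subdivision code is not included in the post; a simplified sketch of the idea, assuming each spot has a fixed height (gap = 15.5 pixels is a guess that would need tuning) and treating every column as a single row of spots (the real lot also has double columns that would additionally be split down the middle), could look like:

def draw_parking(self, image, rects, make_copy=True, color=[255, 0, 0],
                 thickness=2, gap=15.5):
    # gap is an assumed per-spot height in pixels; tune it for your camera view
    if make_copy:
        image = np.copy(image)
    spot_dict = {}   # maps (x1, y1, x2, y2) of each spot to a running spot id
    spot_id = 0
    for key in rects:
        x1, y1, x2, y2 = (int(v) for v in rects[key])
        # How many spots fit vertically in this column
        num_splits = int(abs(y2 - y1) // gap)
        # Horizontal separator lines between stacked spots
        for i in range(num_splits + 1):
            y = int(y1 + i * gap)
            cv2.line(image, (x1, y), (x2, y), color, thickness)
        # Record one bounding box per spot
        for i in range(num_splits):
            top = int(y1 + i * gap)
            bottom = int(y1 + (i + 1) * gap)
            spot_dict[(x1, top, x2, bottom)] = spot_id
            spot_id += 1
    return image, spot_dict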

Finally, a neural network is built to classify the image patch inside each box as occupied or empty.

[Figure: final detection result]
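The post does not show the network itself; a minimal sketch of a small binary classifier in Keras, with an architecture and input size that are purely assumptions rather than the author's model, might be:

from tensorflow.keras import layers, models

def build_spot_classifier(input_shape=(48, 48, 3)):
    # Tiny CNN for two classes: 'empty' and 'occupied'.
    # Architecture and input size are assumptions, not the author's model.
    model = models.Sequential([
        layers.Conv2D(16, (3, 3), activation='relu', input_shape=input_shape),
        layers.MaxPooling2D((2, 2)),
        layers.Conv2D(32, (3, 3), activation='relu'),
        layers.MaxPooling2D((2, 2)),
        layers.Flatten(),
        layers.Dense(64, activation='relu'),
        layers.Dense(2, activation='softmax'),
    ])
    model.compile(optimizer='adam',
                  loss='categorical_crossentropy',
                  metrics=['accuracy'])
    return model

At inference time, each spot rectangle is cropped from the frame, resized to the network's input size, and the predicted class decides whether that spot is counted as free or occupied.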

For the complete code, send the keyword 停车场车位识别 to the WeChat official account.

My WeChat official account: 深度学习与先进智能决策

WeChat official account ID: MultiAgent1024

About the account: it mainly shares content on deep learning, machine game playing, reinforcement learning, and related topics. Follow along and let's learn and improve together!
