We are pleased to announce the VisDrone2020 Object Detection in Images Challenge (Task 1). This competition is designed to push forward the state of the art in object detection on drone platforms. Teams are required to predict bounding boxes for objects of ten predefined classes (i.e., pedestrian, person, car, van, bus, truck, motor, bicycle, awning-tricycle, and tricycle), each with a real-valued confidence. Some rarely occurring special vehicles (e.g., machineshop trucks, forklift trucks, and tankers) are ignored during evaluation.
The challenge contains 10,209 static images (6,471 for training, 548 for validation, and 3,190 for testing) captured by drone platforms in different places at different heights; the data are available on the download page. We manually annotated the bounding boxes of the different object categories in each image. In addition, we provide two other useful annotations: the occlusion ratio and the truncation ratio. The occlusion ratio is defined as the fraction of the object that is occluded. The truncation ratio indicates the degree to which an object extends outside the frame: if an object is not fully captured within a frame, we annotate its bounding box across the frame boundary and estimate the truncation ratio from the region outside the image. Note that a target is skipped during evaluation if its truncation ratio is larger than 50%. Annotations for the training and validation sets are publicly available.
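As a sketch of how such annotations can be consumed, assuming the published VisDrone text format (one object per line: bbox_left, bbox_top, bbox_width, bbox_height, score, object_category, truncation, occlusion), a small loader that keeps only targets within the truncation limit might look like this. The helper names here are my own, not part of the toolkit:

```python
# Hedged sketch: parse VisDrone-style annotation lines and filter by
# truncation. Field order follows the published VisDrone format; the
# function names are illustrative, not official API.

def parse_visdrone_line(line):
    """Parse one 'x,y,w,h,score,category,truncation,occlusion' line."""
    x, y, w, h, score, cat, trunc, occ = (int(v) for v in line.strip().split(',')[:8])
    return {'bbox': (x, y, w, h), 'score': score, 'category': cat,
            'truncation': trunc, 'occlusion': occ}

def keep_for_eval(ann):
    # Truncation flag 1 means 1%-50% truncation; anything beyond 50%
    # is skipped during evaluation, as stated above.
    return ann['truncation'] <= 1

anns = [parse_visdrone_line(l) for l in ["684,8,273,116,0,0,0,0",
                                         "406,119,265,70,0,0,1,1"]]
evaluated = [a for a in anns if keep_for_eval(a)]
```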
A VOC-style annotation file (here, VOC2007's 000001.xml) stores per-image metadata as XML:

<annotation>
    <folder>VOC2007</folder>
    <filename>000001.jpg</filename>  <!-- file name -->
    <source>
        <database>The VOC2007 Database</database>
        <annotation>PASCAL VOC2007</annotation>
        <image>flickr</image>
        <flickrid>341012865</flickrid>
    </source>
    <owner>
        <flickrid>Fried Camels</flickrid>
        <name>Jinky the Fruit Bat</name>
    </owner>
    <size>  <!-- image size, used to normalize the top-left and bottom-right bbox coordinates -->
        <width>353</width>
        <height>500</height>
        <depth>3</depth>
    </size>
    <segmented>0</segmented>  <!-- whether the image is used for segmentation -->
</annotation>
import xml.etree.ElementTree as ET
import os
import sys
from os import getcwd

sets = [('2018', 'train'), ('2018', 'val')]
classes = ["a", "b", "c", "d"]  # soft link your VOC2018 under here
root_dir = sys.argv[1]

def convert(size, box):
    # Convert a VOC box (xmin, xmax, ymin, ymax) into normalized
    # YOLO format (x_center, y_center, width, height).
    dw = 1. / size[0]
    dh = 1. / size[1]
    x = (box[0] + box[1]) / 2.0 - 1
    y = (box[2] + box[3]) / 2.0 - 1
    w = box[1] - box[0]
    h = box[3] - box[2]
    return (x * dw, y * dh, w * dw, h * dh)

def convert_annotation(year, image_id):
    in_file = open(os.path.join(root_dir, 'VOC%s/Annotations/%s.xml' % (year, image_id)))
    out_file = open(os.path.join(root_dir, 'VOC%s/labels/%s.txt' % (year, image_id)), 'w')
    tree = ET.parse(in_file)
    root = tree.getroot()
    size = root.find('size')
    w = int(size.find('width').text)
    h = int(size.find('height').text)
    for obj in root.iter('object'):
        difficult = obj.find('difficult').text
        cls = obj.find('name').text
        if cls not in classes or int(difficult) == 1:
            continue
        cls_id = classes.index(cls)
        xmlbox = obj.find('bndbox')
        b = (float(xmlbox.find('xmin').text), float(xmlbox.find('xmax').text),
             float(xmlbox.find('ymin').text), float(xmlbox.find('ymax').text))
        bb = convert((w, h), b)
        out_file.write(str(cls_id) + " " + " ".join([str(a) for a in bb]) + '\n')
    in_file.close()
    out_file.close()

wd = getcwd()
for year, image_set in sets:
    labels_target = os.path.join(root_dir, 'VOC%s/labels/' % year)
    print('labels dir to save: {}'.format(labels_target))
    if not os.path.exists(labels_target):
        os.makedirs(labels_target)
    image_ids = open(os.path.join(root_dir, 'VOC{}/ImageSets/Main/{}.txt'.format(year, image_set))).read().strip().split()
    list_file = open(os.path.join(root_dir, '%s_%s.txt' % (year, image_set)), 'w')
    for image_id in image_ids:
        img_f = os.path.join(root_dir, 'VOC%s/JPEGImages/%s.jpg' % (year, image_id))
        list_file.write(os.path.abspath(img_f) + '\n')  # one absolute image path per line
        convert_annotation(year, image_id)
    list_file.close()
print('done.')
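A quick sanity check of the conversion formula used above, restated here so the snippet is self-contained (the box values are just an illustrative example in a 353×500 image, matching the VOC2007 sample earlier):

```python
# Self-contained check of the VOC -> YOLO box conversion.
def convert(size, box):
    # box is (xmin, xmax, ymin, ymax); size is (width, height).
    dw, dh = 1. / size[0], 1. / size[1]
    x = (box[0] + box[1]) / 2.0 - 1
    y = (box[2] + box[3]) / 2.0 - 1
    w = box[1] - box[0]
    h = box[3] - box[2]
    return (x * dw, y * dh, w * dw, h * dh)

# Hypothetical box xmin=48, xmax=195, ymin=240, ymax=371 in a 353x500 image:
x, y, w, h = convert((353, 500), (48, 195, 240, 371))
assert all(0.0 < v < 1.0 for v in (x, y, w, h))  # all values normalized
assert abs(w - 147 / 353) < 1e-9 and abs(h - 131 / 500) < 1e-9
```

The `- 1` in the center computation compensates for VOC's 1-based pixel coordinates before normalizing.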
/*
 * Developing triggers
 */
-- Get the day of the week for a date
select to_char(sysdate+4,'DY','nls_date_language=AMERICAN') from dual;
select to_char(sysdate,'DY','nls_date_language=AMERICAN') from dual;
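For comparison, the same day-of-week lookup can be done outside the database in Python. This is a sketch, not Oracle semantics: `strftime('%a')` is locale-dependent and mixed-case, so it is upper-cased here to mimic Oracle's uppercase 'DY' output:

```python
from datetime import date, timedelta

# Rough equivalent of to_char(sysdate, 'DY', 'nls_date_language=AMERICAN')
# and to_char(sysdate+4, 'DY', ...): a 3-letter day abbreviation.
today = date.today().strftime('%a').upper()
in_four_days = (date.today() + timedelta(days=4)).strftime('%a').upper()
```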
-- Create a BEFORE statement trigger
CREATE O
Below is a collection of common vim NERDTree shortcuts; it covers almost all of them and should hopefully be a handy reference.

Switching between the workspace and the directory tree:

ctrl + w + h   move focus to the tree pane on the left
ctrl + w + l   move focus to the file window on the right
ctrl + w + w   cycle focus between the left and right windows
ctrl + w + r   rotate the layout of the current windows

o   open the file, directory, or bookmark in an existing window and jump