Railway Tunnel Fissure Detection Based on Mask R-CNN 2020-11-05

https://github.com/dyh/unbox_detecting_tunnel_fissure
https://www.bilibili.com/video/BV1DT4y1F7yG


Unbox AI

  • unboxing open-source Artificial Intelligence projects and products

welcome to subscribe to my channel

  • youtube channel
  • bilibili channel
[Unbox AI] Railway Tunnel Fissure Detection based on Mask R-CNN and Detectron2

video

  • youtube

cover image: https://github.com/dyh/unbox_detecting_tunnel_fissure/blob/main/cover.png?raw=true

  • for the bilibili video, click here

please use google colab to load tunnel_fissure.ipynb and follow the steps in the tunnel_fissure.ipynb file








1.mount google drive folder

In [ ]:
from google.colab import drive
drive.mount('/content/drive')
Drive already mounted at /content/drive; to attempt to forcibly remount, call drive.mount("/content/drive", force_remount=True).
2.clone project files to the google drive folder

In [ ]:
!git clone 'https://github.com/dyh/unbox_detecting_tunnel_fissure.git' '/content/drive/My Drive/tunnel_fissure'
Cloning into '/content/drive/My Drive/tunnel_fissure'...
remote: Enumerating objects: 17, done.
remote: Counting objects: 100% (17/17), done.
remote: Compressing objects: 100% (11/11), done.
remote: Total 68 (delta 7), reused 15 (delta 6), pack-reused 51
Unpacking objects: 100% (68/68), done.
Checking out files: 100% (39/39), done.
3.install dependencies

In [ ]:

# install dependencies:
!pip install pyyaml==5.1 'pycocotools>=2.0.1'
!pip install torch==1.6.0+cu101 torchvision==0.7.0+cu101 -f https://download.pytorch.org/whl/torch_stable.html

import torch, torchvision
print(torch.__version__, torch.cuda.is_available())
!gcc --version

# opencv is pre-installed on colab

# install detectron2: (Colab has CUDA 10.1 + torch 1.6)
# See https://detectron2.readthedocs.io/tutorials/install.html for instructions
assert torch.__version__.startswith("1.6")
!pip install detectron2 -f https://dl.fbaipublicfiles.com/detectron2/wheels/cu101/torch1.6/index.html
Collecting pyyaml==5.1
Downloading https://files.pythonhosted.org/packages/9f/2c/9417b5c774792634834e730932745bc09a7d36754ca00acf1ccd1ac2594d/PyYAML-5.1.tar.gz (274kB)
|████████████████████████████████| 276kB 9.2MB/s
Requirement already satisfied: pycocotools>=2.0.1 in /usr/local/lib/python3.6/dist-packages (2.0.2)
Requirement already satisfied: setuptools>=18.0 in /usr/local/lib/python3.6/dist-packages (from pycocotools>=2.0.1) (50.3.2)
Requirement already satisfied: matplotlib>=2.1.0 in /usr/local/lib/python3.6/dist-packages (from pycocotools>=2.0.1) (3.2.2)
Requirement already satisfied: cython>=0.27.3 in /usr/local/lib/python3.6/dist-packages (from pycocotools>=2.0.1) (0.29.21)
Requirement already satisfied: cycler>=0.10 in /usr/local/lib/python3.6/dist-packages (from matplotlib>=2.1.0->pycocotools>=2.0.1) (0.10.0)
Requirement already satisfied: pyparsing!=2.0.4,!=2.1.2,!=2.1.6,>=2.0.1 in /usr/local/lib/python3.6/dist-packages (from matplotlib>=2.1.0->pycocotools>=2.0.1) (2.4.7)
Requirement already satisfied: kiwisolver>=1.0.1 in /usr/local/lib/python3.6/dist-packages (from matplotlib>=2.1.0->pycocotools>=2.0.1) (1.2.0)
Requirement already satisfied: python-dateutil>=2.1 in /usr/local/lib/python3.6/dist-packages (from matplotlib>=2.1.0->pycocotools>=2.0.1) (2.8.1)
Requirement already satisfied: numpy>=1.11 in /usr/local/lib/python3.6/dist-packages (from matplotlib>=2.1.0->pycocotools>=2.0.1) (1.18.5)
Requirement already satisfied: six in /usr/local/lib/python3.6/dist-packages (from cycler>=0.10->matplotlib>=2.1.0->pycocotools>=2.0.1) (1.15.0)
Building wheels for collected packages: pyyaml
Building wheel for pyyaml (setup.py) … done
Created wheel for pyyaml: filename=PyYAML-5.1-cp36-cp36m-linux_x86_64.whl size=44075 sha256=0b640dd2ab72f2614b49990ae7e37a9db87663351cdc43386ad733e16746d729
Stored in directory: /root/.cache/pip/wheels/ad/56/bc/1522f864feb2a358ea6f1a92b4798d69ac783a28e80567a18b
Successfully built pyyaml
Installing collected packages: pyyaml
Found existing installation: PyYAML 3.13
Uninstalling PyYAML-3.13:
Successfully uninstalled PyYAML-3.13
Successfully installed pyyaml-5.1
Looking in links: https://download.pytorch.org/whl/torch_stable.html
Requirement already satisfied: torch==1.6.0+cu101 in /usr/local/lib/python3.6/dist-packages (1.6.0+cu101)
Requirement already satisfied: torchvision==0.7.0+cu101 in /usr/local/lib/python3.6/dist-packages (0.7.0+cu101)
Requirement already satisfied: future in /usr/local/lib/python3.6/dist-packages (from torch==1.6.0+cu101) (0.16.0)
Requirement already satisfied: numpy in /usr/local/lib/python3.6/dist-packages (from torch==1.6.0+cu101) (1.18.5)
Requirement already satisfied: pillow>=4.1.1 in /usr/local/lib/python3.6/dist-packages (from torchvision==0.7.0+cu101) (7.0.0)
1.6.0+cu101 True
gcc (Ubuntu 7.5.0-3ubuntu1~18.04) 7.5.0
Copyright © 2017 Free Software Foundation, Inc.
This is free software; see the source for copying conditions. There is NO
warranty; not even for MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.

Looking in links: https://dl.fbaipublicfiles.com/detectron2/wheels/cu101/torch1.6/index.html
Collecting detectron2
Downloading https://dl.fbaipublicfiles.com/detectron2/wheels/cu101/torch1.6/detectron2-0.2.1%2Bcu101-cp36-cp36m-linux_x86_64.whl (6.6MB)
|████████████████████████████████| 6.6MB 606kB/s
Collecting fvcore>=0.1.1
Downloading https://files.pythonhosted.org/packages/e7/37/82dc217199c10288f3d05f50f342cb270ff2630841734bdfa40b54b0f8bc/fvcore-0.1.2.post20201104.tar.gz
Requirement already satisfied: future in /usr/local/lib/python3.6/dist-packages (from detectron2) (0.16.0)
Requirement already satisfied: matplotlib in /usr/local/lib/python3.6/dist-packages (from detectron2) (3.2.2)
Requirement already satisfied: tensorboard in /usr/local/lib/python3.6/dist-packages (from detectron2) (2.3.0)
Requirement already satisfied: pydot in /usr/local/lib/python3.6/dist-packages (from detectron2) (1.3.0)
Requirement already satisfied: tabulate in /usr/local/lib/python3.6/dist-packages (from detectron2) (0.8.7)
Collecting yacs>=0.1.6
Downloading https://files.pythonhosted.org/packages/38/4f/fe9a4d472aa867878ce3bb7efb16654c5d63672b86dc0e6e953a67018433/yacs-0.1.8-py3-none-any.whl
Requirement already satisfied: termcolor>=1.1 in /usr/local/lib/python3.6/dist-packages (from detectron2) (1.1.0)
Collecting mock
Downloading https://files.pythonhosted.org/packages/cd/74/d72daf8dff5b6566db857cfd088907bb0355f5dd2914c4b3ef065c790735/mock-4.0.2-py3-none-any.whl
Requirement already satisfied: pycocotools>=2.0.1 in /usr/local/lib/python3.6/dist-packages (from detectron2) (2.0.2)
Requirement already satisfied: tqdm>4.29.0 in /usr/local/lib/python3.6/dist-packages (from detectron2) (4.41.1)
Collecting Pillow>=7.1
Downloading https://files.pythonhosted.org/packages/5f/19/d4c25111d36163698396f93c363114cf1cddbacb24744f6612f25b6aa3d0/Pillow-8.0.1-cp36-cp36m-manylinux1_x86_64.whl (2.2MB)
|████████████████████████████████| 2.2MB 16.7MB/s
Requirement already satisfied: cloudpickle in /usr/local/lib/python3.6/dist-packages (from detectron2) (1.3.0)
Requirement already satisfied: numpy in /usr/local/lib/python3.6/dist-packages (from fvcore>=0.1.1->detectron2) (1.18.5)
Requirement already satisfied: pyyaml>=5.1 in /usr/local/lib/python3.6/dist-packages (from fvcore>=0.1.1->detectron2) (5.1)
Collecting portalocker
Downloading https://files.pythonhosted.org/packages/89/a6/3814b7107e0788040870e8825eebf214d72166adf656ba7d4bf14759a06a/portalocker-2.0.0-py2.py3-none-any.whl
Requirement already satisfied: pyparsing!=2.0.4,!=2.1.2,!=2.1.6,>=2.0.1 in /usr/local/lib/python3.6/dist-packages (from matplotlib->detectron2) (2.4.7)
Requirement already satisfied: kiwisolver>=1.0.1 in /usr/local/lib/python3.6/dist-packages (from matplotlib->detectron2) (1.2.0)
Requirement already satisfied: python-dateutil>=2.1 in /usr/local/lib/python3.6/dist-packages (from matplotlib->detectron2) (2.8.1)
Requirement already satisfied: cycler>=0.10 in /usr/local/lib/python3.6/dist-packages (from matplotlib->detectron2) (0.10.0)
Requirement already satisfied: tensorboard-plugin-wit>=1.6.0 in /usr/local/lib/python3.6/dist-packages (from tensorboard->detectron2) (1.7.0)
Requirement already satisfied: six>=1.10.0 in /usr/local/lib/python3.6/dist-packages (from tensorboard->detectron2) (1.15.0)
Requirement already satisfied: werkzeug>=0.11.15 in /usr/local/lib/python3.6/dist-packages (from tensorboard->detectron2) (1.0.1)
Requirement already satisfied: wheel>=0.26; python_version >= “3” in /usr/local/lib/python3.6/dist-packages (from tensorboard->detectron2) (0.35.1)
Requirement already satisfied: markdown>=2.6.8 in /usr/local/lib/python3.6/dist-packages (from tensorboard->detectron2) (3.3.2)
Requirement already satisfied: google-auth-oauthlib<0.5,>=0.4.1 in /usr/local/lib/python3.6/dist-packages (from tensorboard->detectron2) (0.4.1)
Requirement already satisfied: protobuf>=3.6.0 in /usr/local/lib/python3.6/dist-packages (from tensorboard->detectron2) (3.12.4)
Requirement already satisfied: grpcio>=1.24.3 in /usr/local/lib/python3.6/dist-packages (from tensorboard->detectron2) (1.33.1)
Requirement already satisfied: setuptools>=41.0.0 in /usr/local/lib/python3.6/dist-packages (from tensorboard->detectron2) (50.3.2)
Requirement already satisfied: google-auth<2,>=1.6.3 in /usr/local/lib/python3.6/dist-packages (from tensorboard->detectron2) (1.17.2)
Requirement already satisfied: absl-py>=0.4 in /usr/local/lib/python3.6/dist-packages (from tensorboard->detectron2) (0.10.0)
Requirement already satisfied: requests<3,>=2.21.0 in /usr/local/lib/python3.6/dist-packages (from tensorboard->detectron2) (2.23.0)
Requirement already satisfied: cython>=0.27.3 in /usr/local/lib/python3.6/dist-packages (from pycocotools>=2.0.1->detectron2) (0.29.21)
Requirement already satisfied: importlib-metadata; python_version < “3.8” in /usr/local/lib/python3.6/dist-packages (from markdown>=2.6.8->tensorboard->detectron2) (2.0.0)
Requirement already satisfied: requests-oauthlib>=0.7.0 in /usr/local/lib/python3.6/dist-packages (from google-auth-oauthlib<0.5,>=0.4.1->tensorboard->detectron2) (1.3.0)
Requirement already satisfied: rsa<5,>=3.1.4; python_version >= “3” in /usr/local/lib/python3.6/dist-packages (from google-auth<2,>=1.6.3->tensorboard->detectron2) (4.6)
Requirement already satisfied: cachetools<5.0,>=2.0.0 in /usr/local/lib/python3.6/dist-packages (from google-auth<2,>=1.6.3->tensorboard->detectron2) (4.1.1)
Requirement already satisfied: pyasn1-modules>=0.2.1 in /usr/local/lib/python3.6/dist-packages (from google-auth<2,>=1.6.3->tensorboard->detectron2) (0.2.8)
Requirement already satisfied: certifi>=2017.4.17 in /usr/local/lib/python3.6/dist-packages (from requests<3,>=2.21.0->tensorboard->detectron2) (2020.6.20)
Requirement already satisfied: idna<3,>=2.5 in /usr/local/lib/python3.6/dist-packages (from requests<3,>=2.21.0->tensorboard->detectron2) (2.10)
Requirement already satisfied: chardet<4,>=3.0.2 in /usr/local/lib/python3.6/dist-packages (from requests<3,>=2.21.0->tensorboard->detectron2) (3.0.4)
Requirement already satisfied: urllib3!=1.25.0,!=1.25.1,<1.26,>=1.21.1 in /usr/local/lib/python3.6/dist-packages (from requests<3,>=2.21.0->tensorboard->detectron2) (1.24.3)
Requirement already satisfied: zipp>=0.5 in /usr/local/lib/python3.6/dist-packages (from importlib-metadata; python_version < “3.8”->markdown>=2.6.8->tensorboard->detectron2) (3.3.1)
Requirement already satisfied: oauthlib>=3.0.0 in /usr/local/lib/python3.6/dist-packages (from requests-oauthlib>=0.7.0->google-auth-oauthlib<0.5,>=0.4.1->tensorboard->detectron2) (3.1.0)
Requirement already satisfied: pyasn1>=0.1.3 in /usr/local/lib/python3.6/dist-packages (from rsa<5,>=3.1.4; python_version >= “3”->google-auth<2,>=1.6.3->tensorboard->detectron2) (0.4.8)
Building wheels for collected packages: fvcore
Building wheel for fvcore (setup.py) … done
Created wheel for fvcore: filename=fvcore-0.1.2.post20201104-cp36-none-any.whl size=44419 sha256=9293e147a63b458f7c9b1fd3f2caccee630bd12b5e30720881c8b29a01e1aea7
Stored in directory: /root/.cache/pip/wheels/ec/4d/40/4077356fe02ef345791713eabede5ed63afe7d613b016694d1
Successfully built fvcore
ERROR: albumentations 0.1.12 has requirement imgaug<0.2.7,>=0.2.5, but you’ll have imgaug 0.2.9 which is incompatible.
Installing collected packages: yacs, portalocker, Pillow, fvcore, mock, detectron2
Found existing installation: Pillow 7.0.0
Uninstalling Pillow-7.0.0:
Successfully uninstalled Pillow-7.0.0
Successfully installed Pillow-8.0.1 detectron2-0.2.1+cu101 fvcore-0.1.2.post20201104 mock-4.0.2 portalocker-2.0.0 yacs-0.1.8
please make sure you have clicked the [ RESTART RUNTIME ] button -> [ YES ] button to restart the colab runtime
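after the runtime restarts, a quick sanity check confirms the environment before moving on. this is a minimal sketch of my own; the expected values in the comments assume the cu101/torch1.6 wheels installed above:

# minimal post-restart sanity check (assumes the install cell above succeeded)
import torch, detectron2

print(torch.__version__)          # expected: 1.6.0+cu101
print(torch.cuda.is_available())  # expected: True on a GPU runtime
print(detectron2.__version__)     # expected: 0.2.1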

4.annotate your own training samples (optional)

here's how to annotate samples from scratch; if you don't care about annotation, you can skip this section.

i have installed the 'google drive backup and sync' app, which automatically synchronizes google drive files to my local machine for easy annotation. you can download it at https://www.google.com/drive/download/

in google drive, go to the '/content/drive/My Drive/tunnel_fissure/images/train' folder and back up the original 'via_region_data.json' file by renaming it to 'via_region_data_bak.json'

go to the VGG Image Annotator (VIA for short) website http://www.robots.ox.ac.uk/~vgg/software/via/via_demo.html

remove the 2 demo images in VIA, the swan and 'The Death of Socrates'

add your images to VIA; here we add the images from the train folder

configure the attributes under Region Attributes:

remove the 'image_quality' attribute

change the default value of the 'name' attribute from 'not_defined' to 'fissure'

add 'fissure' and 'water' to the 'type' attribute and remove the other values

annotate some fissure regions and water-seepage regions

click [ Project -> Save ] to save the project file 'project.json', so you can pick up annotation where you left off next time

click [ Annotation -> Export Annotations (as json) ] to export the json file 'data_json.json', then rename it to 'via_region_data.json'

we can use the 'google drive backup and sync' app to sync 'via_region_data.json' from the local download folder to the '/content/drive/My Drive/tunnel_fissure/images/train' folder

now you have your own training dataset 'via_region_data.json'; a sketch of what one entry looks like follows below
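for orientation, a single entry in the exported via_region_data.json looks roughly like this. it is a hand-written example based on the VIA 2.x format and on the fields that get_fissures_dicts() in step 6 reads; the top-level key, filename, size and coordinates are made up:

# hypothetical VIA entry, matching the fields get_fissures_dicts() expects
example_entry = {
    "fissure_001.jpg123456": {  # VIA keys entries by filename + file size
        "filename": "fissure_001.jpg",
        "size": 123456,
        "regions": [
            {
                "shape_attributes": {
                    "name": "polygon",
                    "all_points_x": [120, 180, 175, 118],
                    "all_points_y": [60, 65, 140, 135],
                },
                "region_attributes": {"type": "fissure"},  # or "water"
            }
        ],
        "file_attributes": {},
    }
}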

5.import modules

In [ ]:

# Some basic setup:
# Setup detectron2 logger
import detectron2
from detectron2.utils.logger import setup_logger
setup_logger()

# import some common libraries
import numpy as np
import os, json, cv2, random
from google.colab.patches import cv2_imshow

# import some common detectron2 utilities
from detectron2 import model_zoo
from detectron2.engine import DefaultPredictor
from detectron2.config import get_cfg
from detectron2.utils.visualizer import Visualizer
from detectron2.data import MetadataCatalog, DatasetCatalog
6.register train & val datasets

In [ ]:
from detectron2.structures import BoxMode

def get_fissures_dicts(img_dir):
    json_file = os.path.join(img_dir, "via_region_data.json")
    with open(json_file) as f:
        imgs_anns = json.load(f)

    dataset_dicts = []
    for idx, v in enumerate(imgs_anns.values()):
        record = {}

        filename = os.path.join(img_dir, v["filename"])
        height, width = cv2.imread(filename).shape[:2]

        record["file_name"] = filename
        record["image_id"] = idx
        record["height"] = height
        record["width"] = width

        list_annos = v["regions"]

        objs = []
        # for _, anno in annos.items():
        for dict_anno in list_annos:
            # assert not anno["region_attributes"]
            anno = dict_anno["shape_attributes"]
            px = anno["all_points_x"]
            py = anno["all_points_y"]
            # polygon vertices, shifted to pixel centers and flattened
            poly = [(x + 0.5, y + 0.5) for x, y in zip(px, py)]
            poly = [p for x in poly for p in x]

            # get type from region_attributes to set different category_id
            attr1 = dict_anno["region_attributes"]
            type1 = attr1["type"]

            if type1 == "fissure":
                cat_id = 0
            elif type1 == "water":
                cat_id = 1
            else:
                cat_id = 0

            obj = {
                "bbox": [np.min(px), np.min(py), np.max(px), np.max(py)],
                "bbox_mode": BoxMode.XYXY_ABS,
                "segmentation": [poly],
                "category_id": cat_id,
            }
            objs.append(obj)
        record["annotations"] = objs
        dataset_dicts.append(record)
    return dataset_dicts

for d in ["train", "val"]:
    DatasetCatalog.register("fissures_" + d, lambda d=d: get_fissures_dicts(os.path.join("/content/drive/My Drive/tunnel_fissure/images", d)))
    MetadataCatalog.get("fissures_" + d).set(thing_classes=["fissure", "water"])

fissures_metadata = MetadataCatalog.get("fissures_train")
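before previewing, a quick check confirms that both registered splits load and count their annotations. this is a minimal sketch of my own, assuming both the train and val folders contain a via_region_data.json as described in step 4:

# hypothetical sanity check: make sure both registered splits load cleanly
for name in ["fissures_train", "fissures_val"]:
    dicts = DatasetCatalog.get(name)  # invokes get_fissures_dicts under the hood
    n_objs = sum(len(r["annotations"]) for r in dicts)
    print(name, len(dicts), "images,", n_objs, "annotated regions")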
7.preview train dataset

In [ ]:
dataset_dicts = get_fissures_dicts("/content/drive/My Drive/tunnel_fissure/images/train")
for d in random.sample(dataset_dicts, 3):
    img = cv2.imread(d["file_name"])
    visualizer = Visualizer(img[:, :, ::-1], metadata=fissures_metadata, scale=0.5)
    out = visualizer.draw_dataset_dict(d)
    cv2_imshow(out.get_image()[:, :, ::-1])
8.train a model

In [ ]:
from detectron2.engine import DefaultTrainer

cfg = get_cfg()
cfg.merge_from_file(model_zoo.get_config_file("COCO-InstanceSegmentation/mask_rcnn_R_50_FPN_3x.yaml"))
cfg.DATASETS.TRAIN = ("fissures_train",)
cfg.DATASETS.TEST = ()
cfg.DATALOADER.NUM_WORKERS = 2
cfg.MODEL.WEIGHTS = model_zoo.get_checkpoint_url("COCO-InstanceSegmentation/mask_rcnn_R_50_FPN_3x.yaml")  # Let training initialize from model zoo
cfg.SOLVER.IMS_PER_BATCH = 2
cfg.SOLVER.BASE_LR = 0.00025  # pick a good LR
cfg.SOLVER.MAX_ITER = 250  # you will need to train longer for a practical dataset
# cfg.SOLVER.MAX_ITER = 200000  # uncomment for a much longer training run

cfg.MODEL.ROI_HEADS.BATCH_SIZE_PER_IMAGE = 512  # default: 512
cfg.MODEL.ROI_HEADS.NUM_CLASSES = 2  # has two classes (fissure, water).

cfg.OUTPUT_DIR = '/content/drive/My Drive/tunnel_fissure/output'
os.makedirs(cfg.OUTPUT_DIR, exist_ok=True)

trainer = DefaultTrainer(cfg)
trainer.resume_or_load(resume=False)
# trainer.resume_or_load(resume=True)  # use this instead to resume from the last checkpoint

trainer.train()

print('train done.')

# Look at training curves in tensorboard:
%load_ext tensorboard
%tensorboard --logdir '/content/drive/My Drive/tunnel_fissure/output'
[11/04 07:16:42 d2.engine.defaults]: Model:
GeneralizedRCNN(
(backbone): FPN(
(fpn_lateral2): Conv2d(256, 256, kernel_size=(1, 1), stride=(1, 1))
(fpn_output2): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(fpn_lateral3): Conv2d(512, 256, kernel_size=(1, 1), stride=(1, 1))
(fpn_output3): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(fpn_lateral4): Conv2d(1024, 256, kernel_size=(1, 1), stride=(1, 1))
(fpn_output4): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(fpn_lateral5): Conv2d(2048, 256, kernel_size=(1, 1), stride=(1, 1))
(fpn_output5): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(top_block): LastLevelMaxPool()
(bottom_up): ResNet(
(stem): BasicStem(
(conv1): Conv2d(
3, 64, kernel_size=(7, 7), stride=(2, 2), padding=(3, 3), bias=False
(norm): FrozenBatchNorm2d(num_features=64, eps=1e-05)
)
)
(res2): Sequential(
(0): BottleneckBlock(
(shortcut): Conv2d(
64, 256, kernel_size=(1, 1), stride=(1, 1), bias=False
(norm): FrozenBatchNorm2d(num_features=256, eps=1e-05)
)
(conv1): Conv2d(
64, 64, kernel_size=(1, 1), stride=(1, 1), bias=False
(norm): FrozenBatchNorm2d(num_features=64, eps=1e-05)
)
(conv2): Conv2d(
64, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False
(norm): FrozenBatchNorm2d(num_features=64, eps=1e-05)
)
(conv3): Conv2d(
64, 256, kernel_size=(1, 1), stride=(1, 1), bias=False
(norm): FrozenBatchNorm2d(num_features=256, eps=1e-05)
)
)
(1): BottleneckBlock(
(conv1): Conv2d(
256, 64, kernel_size=(1, 1), stride=(1, 1), bias=False
(norm): FrozenBatchNorm2d(num_features=64, eps=1e-05)
)
(conv2): Conv2d(
64, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False
(norm): FrozenBatchNorm2d(num_features=64, eps=1e-05)
)
(conv3): Conv2d(
64, 256, kernel_size=(1, 1), stride=(1, 1), bias=False
(norm): FrozenBatchNorm2d(num_features=256, eps=1e-05)
)
)
(2): BottleneckBlock(
(conv1): Conv2d(
256, 64, kernel_size=(1, 1), stride=(1, 1), bias=False
(norm): FrozenBatchNorm2d(num_features=64, eps=1e-05)
)
(conv2): Conv2d(
64, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False
(norm): FrozenBatchNorm2d(num_features=64, eps=1e-05)
)
(conv3): Conv2d(
64, 256, kernel_size=(1, 1), stride=(1, 1), bias=False
(norm): FrozenBatchNorm2d(num_features=256, eps=1e-05)
)
)
)
(res3): Sequential(
(0): BottleneckBlock(
(shortcut): Conv2d(
256, 512, kernel_size=(1, 1), stride=(2, 2), bias=False
(norm): FrozenBatchNorm2d(num_features=512, eps=1e-05)
)
(conv1): Conv2d(
256, 128, kernel_size=(1, 1), stride=(2, 2), bias=False
(norm): FrozenBatchNorm2d(num_features=128, eps=1e-05)
)
(conv2): Conv2d(
128, 128, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False
(norm): FrozenBatchNorm2d(num_features=128, eps=1e-05)
)
(conv3): Conv2d(
128, 512, kernel_size=(1, 1), stride=(1, 1), bias=False
(norm): FrozenBatchNorm2d(num_features=512, eps=1e-05)
)
)
(1): BottleneckBlock(
(conv1): Conv2d(
512, 128, kernel_size=(1, 1), stride=(1, 1), bias=False
(norm): FrozenBatchNorm2d(num_features=128, eps=1e-05)
)
(conv2): Conv2d(
128, 128, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False
(norm): FrozenBatchNorm2d(num_features=128, eps=1e-05)
)
(conv3): Conv2d(
128, 512, kernel_size=(1, 1), stride=(1, 1), bias=False
(norm): FrozenBatchNorm2d(num_features=512, eps=1e-05)
)
)
(2): BottleneckBlock(
(conv1): Conv2d(
512, 128, kernel_size=(1, 1), stride=(1, 1), bias=False
(norm): FrozenBatchNorm2d(num_features=128, eps=1e-05)
)
(conv2): Conv2d(
128, 128, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False
(norm): FrozenBatchNorm2d(num_features=128, eps=1e-05)
)
(conv3): Conv2d(
128, 512, kernel_size=(1, 1), stride=(1, 1), bias=False
(norm): FrozenBatchNorm2d(num_features=512, eps=1e-05)
)
)
(3): BottleneckBlock(
(conv1): Conv2d(
512, 128, kernel_size=(1, 1), stride=(1, 1), bias=False
(norm): FrozenBatchNorm2d(num_features=128, eps=1e-05)
)
(conv2): Conv2d(
128, 128, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False
(norm): FrozenBatchNorm2d(num_features=128, eps=1e-05)
)
(conv3): Conv2d(
128, 512, kernel_size=(1, 1), stride=(1, 1), bias=False
(norm): FrozenBatchNorm2d(num_features=512, eps=1e-05)
)
)
)
(res4): Sequential(
(0): BottleneckBlock(
(shortcut): Conv2d(
512, 1024, kernel_size=(1, 1), stride=(2, 2), bias=False
(norm): FrozenBatchNorm2d(num_features=1024, eps=1e-05)
)
(conv1): Conv2d(
512, 256, kernel_size=(1, 1), stride=(2, 2), bias=False
(norm): FrozenBatchNorm2d(num_features=256, eps=1e-05)
)
(conv2): Conv2d(
256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False
(norm): FrozenBatchNorm2d(num_features=256, eps=1e-05)
)
(conv3): Conv2d(
256, 1024, kernel_size=(1, 1), stride=(1, 1), bias=False
(norm): FrozenBatchNorm2d(num_features=1024, eps=1e-05)
)
)
(1): BottleneckBlock(
(conv1): Conv2d(
1024, 256, kernel_size=(1, 1), stride=(1, 1), bias=False
(norm): FrozenBatchNorm2d(num_features=256, eps=1e-05)
)
(conv2): Conv2d(
256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False
(norm): FrozenBatchNorm2d(num_features=256, eps=1e-05)
)
(conv3): Conv2d(
256, 1024, kernel_size=(1, 1), stride=(1, 1), bias=False
(norm): FrozenBatchNorm2d(num_features=1024, eps=1e-05)
)
)
(2): BottleneckBlock(
(conv1): Conv2d(
1024, 256, kernel_size=(1, 1), stride=(1, 1), bias=False
(norm): FrozenBatchNorm2d(num_features=256, eps=1e-05)
)
(conv2): Conv2d(
256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False
(norm): FrozenBatchNorm2d(num_features=256, eps=1e-05)
)
(conv3): Conv2d(
256, 1024, kernel_size=(1, 1), stride=(1, 1), bias=False
(norm): FrozenBatchNorm2d(num_features=1024, eps=1e-05)
)
)
(3): BottleneckBlock(
(conv1): Conv2d(
1024, 256, kernel_size=(1, 1), stride=(1, 1), bias=False
(norm): FrozenBatchNorm2d(num_features=256, eps=1e-05)
)
(conv2): Conv2d(
256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False
(norm): FrozenBatchNorm2d(num_features=256, eps=1e-05)
)
(conv3): Conv2d(
256, 1024, kernel_size=(1, 1), stride=(1, 1), bias=False
(norm): FrozenBatchNorm2d(num_features=1024, eps=1e-05)
)
)
(4): BottleneckBlock(
(conv1): Conv2d(
1024, 256, kernel_size=(1, 1), stride=(1, 1), bias=False
(norm): FrozenBatchNorm2d(num_features=256, eps=1e-05)
)
(conv2): Conv2d(
256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False
(norm): FrozenBatchNorm2d(num_features=256, eps=1e-05)
)
(conv3): Conv2d(
256, 1024, kernel_size=(1, 1), stride=(1, 1), bias=False
(norm): FrozenBatchNorm2d(num_features=1024, eps=1e-05)
)
)
(5): BottleneckBlock(
(conv1): Conv2d(
1024, 256, kernel_size=(1, 1), stride=(1, 1), bias=False
(norm): FrozenBatchNorm2d(num_features=256, eps=1e-05)
)
(conv2): Conv2d(
256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False
(norm): FrozenBatchNorm2d(num_features=256, eps=1e-05)
)
(conv3): Conv2d(
256, 1024, kernel_size=(1, 1), stride=(1, 1), bias=False
(norm): FrozenBatchNorm2d(num_features=1024, eps=1e-05)
)
)
)
(res5): Sequential(
(0): BottleneckBlock(
(shortcut): Conv2d(
1024, 2048, kernel_size=(1, 1), stride=(2, 2), bias=False
(norm): FrozenBatchNorm2d(num_features=2048, eps=1e-05)
)
(conv1): Conv2d(
1024, 512, kernel_size=(1, 1), stride=(2, 2), bias=False
(norm): FrozenBatchNorm2d(num_features=512, eps=1e-05)
)
(conv2): Conv2d(
512, 512, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False
(norm): FrozenBatchNorm2d(num_features=512, eps=1e-05)
)
(conv3): Conv2d(
512, 2048, kernel_size=(1, 1), stride=(1, 1), bias=False
(norm): FrozenBatchNorm2d(num_features=2048, eps=1e-05)
)
)
(1): BottleneckBlock(
(conv1): Conv2d(
2048, 512, kernel_size=(1, 1), stride=(1, 1), bias=False
(norm): FrozenBatchNorm2d(num_features=512, eps=1e-05)
)
(conv2): Conv2d(
512, 512, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False
(norm): FrozenBatchNorm2d(num_features=512, eps=1e-05)
)
(conv3): Conv2d(
512, 2048, kernel_size=(1, 1), stride=(1, 1), bias=False
(norm): FrozenBatchNorm2d(num_features=2048, eps=1e-05)
)
)
(2): BottleneckBlock(
(conv1): Conv2d(
2048, 512, kernel_size=(1, 1), stride=(1, 1), bias=False
(norm): FrozenBatchNorm2d(num_features=512, eps=1e-05)
)
(conv2): Conv2d(
512, 512, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False
(norm): FrozenBatchNorm2d(num_features=512, eps=1e-05)
)
(conv3): Conv2d(
512, 2048, kernel_size=(1, 1), stride=(1, 1), bias=False
(norm): FrozenBatchNorm2d(num_features=2048, eps=1e-05)
)
)
)
)
)
(proposal_generator): RPN(
(rpn_head): StandardRPNHead(
(conv): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(objectness_logits): Conv2d(256, 3, kernel_size=(1, 1), stride=(1, 1))
(anchor_deltas): Conv2d(256, 12, kernel_size=(1, 1), stride=(1, 1))
)
(anchor_generator): DefaultAnchorGenerator(
(cell_anchors): BufferList()
)
)
(roi_heads): StandardROIHeads(
(box_pooler): ROIPooler(
(level_poolers): ModuleList(
(0): ROIAlign(output_size=(7, 7), spatial_scale=0.25, sampling_ratio=0, aligned=True)
(1): ROIAlign(output_size=(7, 7), spatial_scale=0.125, sampling_ratio=0, aligned=True)
(2): ROIAlign(output_size=(7, 7), spatial_scale=0.0625, sampling_ratio=0, aligned=True)
(3): ROIAlign(output_size=(7, 7), spatial_scale=0.03125, sampling_ratio=0, aligned=True)
)
)
(box_head): FastRCNNConvFCHead(
(fc1): Linear(in_features=12544, out_features=1024, bias=True)
(fc2): Linear(in_features=1024, out_features=1024, bias=True)
)
(box_predictor): FastRCNNOutputLayers(
(cls_score): Linear(in_features=1024, out_features=3, bias=True)
(bbox_pred): Linear(in_features=1024, out_features=8, bias=True)
)
(mask_pooler): ROIPooler(
(level_poolers): ModuleList(
(0): ROIAlign(output_size=(14, 14), spatial_scale=0.25, sampling_ratio=0, aligned=True)
(1): ROIAlign(output_size=(14, 14), spatial_scale=0.125, sampling_ratio=0, aligned=True)
(2): ROIAlign(output_size=(14, 14), spatial_scale=0.0625, sampling_ratio=0, aligned=True)
(3): ROIAlign(output_size=(14, 14), spatial_scale=0.03125, sampling_ratio=0, aligned=True)
)
)
(mask_head): MaskRCNNConvUpsampleHead(
(mask_fcn1): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(mask_fcn2): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(mask_fcn3): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(mask_fcn4): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(deconv): ConvTranspose2d(256, 256, kernel_size=(2, 2), stride=(2, 2))
(predictor): Conv2d(256, 2, kernel_size=(1, 1), stride=(1, 1))
)
)
)
[11/04 07:16:44 d2.data.build]: Removed 0 images with no usable annotations. 16 images left.
[11/04 07:16:44 d2.data.build]: Distribution of instances among all 2 categories:

|  category  | #instances  |  category  | #instances  |
|:----------:|:-----------:|:----------:|:-----------:|
|  fissure   | 19          |   water    | 9           |
|   total    | 28          |            |             |

[11/04 07:16:44 d2.data.common]: Serializing 16 elements to byte tensors and concatenating them all …
[11/04 07:16:44 d2.data.common]: Serialized dataset takes 0.03 MiB
[11/04 07:16:44 d2.data.dataset_mapper]: Augmentations used in training: [ResizeShortestEdge(short_edge_length=(640, 672, 704, 736, 768, 800), max_size=1333, sample_style='choice'), RandomFlip()]
[11/04 07:16:44 d2.data.build]: Using training sampler TrainingSampler
Skip loading parameter 'roi_heads.box_predictor.cls_score.weight' to the model due to incompatible shapes: (81, 1024) in the checkpoint but (3, 1024) in the model! You might want to double check if this is expected.
Skip loading parameter 'roi_heads.box_predictor.cls_score.bias' to the model due to incompatible shapes: (81,) in the checkpoint but (3,) in the model! You might want to double check if this is expected.
Skip loading parameter 'roi_heads.box_predictor.bbox_pred.weight' to the model due to incompatible shapes: (320, 1024) in the checkpoint but (8, 1024) in the model! You might want to double check if this is expected.
Skip loading parameter 'roi_heads.box_predictor.bbox_pred.bias' to the model due to incompatible shapes: (320,) in the checkpoint but (8,) in the model! You might want to double check if this is expected.
Skip loading parameter 'roi_heads.mask_head.predictor.weight' to the model due to incompatible shapes: (80, 256, 1, 1) in the checkpoint but (2, 256, 1, 1) in the model! You might want to double check if this is expected.
Skip loading parameter 'roi_heads.mask_head.predictor.bias' to the model due to incompatible shapes: (80,) in the checkpoint but (2,) in the model! You might want to double check if this is expected.
[11/04 07:16:46 d2.engine.train_loop]: Starting training from iteration 0
/usr/local/lib/python3.6/dist-packages/detectron2/structures/masks.py:331: UserWarning: This overload of nonzero is deprecated:
	nonzero()
Consider using one of the following signatures instead:
	nonzero(*, bool as_tuple) (Triggered internally at /pytorch/torch/csrc/utils/python_arg_parser.cpp:766.)
  item = item.nonzero().squeeze(1).cpu().numpy().tolist()
/usr/local/lib/python3.6/dist-packages/detectron2/structures/masks.py:331: UserWarning: This overload of nonzero is deprecated:
	nonzero()
Consider using one of the following signatures instead:
	nonzero(*, bool as_tuple) (Triggered internally at /pytorch/torch/csrc/utils/python_arg_parser.cpp:766.)
  item = item.nonzero().squeeze(1).cpu().numpy().tolist()
/usr/local/lib/python3.6/dist-packages/detectron2/layers/wrappers.py:226: UserWarning: This overload of nonzero is deprecated:
	nonzero()
Consider using one of the following signatures instead:
	nonzero(*, bool as_tuple) (Triggered internally at /pytorch/torch/csrc/utils/python_arg_parser.cpp:766.)
  return x.nonzero().unbind(1)
[11/04 07:16:59 d2.utils.events]: eta: 0:02:20 iter: 19 total_loss: 2.715 loss_cls: 1.075 loss_box_reg: 0.000 loss_mask: 0.694 loss_rpn_cls: 0.760 loss_rpn_loc: 0.213 time: 0.5992 data_time: 0.3178 lr: 0.000005 max_mem: 2178M
[11/04 07:17:11 d2.utils.events]: eta: 0:02:07 iter: 39 total_loss: 2.434 loss_cls: 0.872 loss_box_reg: 0.005 loss_mask: 0.691 loss_rpn_cls: 0.615 loss_rpn_loc: 0.165 time: 0.6023 data_time: 0.2886 lr: 0.000010 max_mem: 2178M
[11/04 07:17:23 d2.utils.events]: eta: 0:01:53 iter: 59 total_loss: 1.813 loss_cls: 0.629 loss_box_reg: 0.013 loss_mask: 0.686 loss_rpn_cls: 0.322 loss_rpn_loc: 0.202 time: 0.5992 data_time: 0.2889 lr: 0.000015 max_mem: 2178M
[11/04 07:17:35 d2.utils.events]: eta: 0:01:41 iter: 79 total_loss: 1.481 loss_cls: 0.440 loss_box_reg: 0.015 loss_mask: 0.678 loss_rpn_cls: 0.196 loss_rpn_loc: 0.163 time: 0.5985 data_time: 0.2884 lr: 0.000020 max_mem: 2178M
[11/04 07:17:47 d2.utils.events]: eta: 0:01:29 iter: 99 total_loss: 1.221 loss_cls: 0.243 loss_box_reg: 0.025 loss_mask: 0.666 loss_rpn_cls: 0.115 loss_rpn_loc: 0.158 time: 0.5960 data_time: 0.2551 lr: 0.000025 max_mem: 2178M
[11/04 07:17:59 d2.utils.events]: eta: 0:01:18 iter: 119 total_loss: 1.095 loss_cls: 0.150 loss_box_reg: 0.031 loss_mask: 0.654 loss_rpn_cls: 0.093 loss_rpn_loc: 0.135 time: 0.5986 data_time: 0.2853 lr: 0.000030 max_mem: 2178M
[11/04 07:18:11 d2.utils.events]: eta: 0:01:05 iter: 139 total_loss: 0.995 loss_cls: 0.102 loss_box_reg: 0.042 loss_mask: 0.638 loss_rpn_cls: 0.077 loss_rpn_loc: 0.133 time: 0.5998 data_time: 0.2886 lr: 0.000035 max_mem: 2178M
[11/04 07:18:23 d2.utils.events]: eta: 0:00:54 iter: 159 total_loss: 0.977 loss_cls: 0.093 loss_box_reg: 0.044 loss_mask: 0.616 loss_rpn_cls: 0.075 loss_rpn_loc: 0.096 time: 0.5997 data_time: 0.2784 lr: 0.000040 max_mem: 2178M
[11/04 07:18:35 d2.utils.events]: eta: 0:00:42 iter: 179 total_loss: 0.934 loss_cls: 0.090 loss_box_reg: 0.051 loss_mask: 0.606 loss_rpn_cls: 0.067 loss_rpn_loc: 0.120 time: 0.5998 data_time: 0.2779 lr: 0.000045 max_mem: 2178M
[11/04 07:18:47 d2.utils.events]: eta: 0:00:30 iter: 199 total_loss: 0.930 loss_cls: 0.097 loss_box_reg: 0.062 loss_mask: 0.586 loss_rpn_cls: 0.050 loss_rpn_loc: 0.142 time: 0.5995 data_time: 0.2744 lr: 0.000050 max_mem: 2178M
[11/04 07:18:59 d2.utils.events]: eta: 0:00:18 iter: 219 total_loss: 0.880 loss_cls: 0.091 loss_box_reg: 0.063 loss_mask: 0.552 loss_rpn_cls: 0.047 loss_rpn_loc: 0.091 time: 0.5987 data_time: 0.2564 lr: 0.000055 max_mem: 2178M
[11/04 07:19:11 d2.utils.events]: eta: 0:00:06 iter: 239 total_loss: 0.858 loss_cls: 0.102 loss_box_reg: 0.063 loss_mask: 0.534 loss_rpn_cls: 0.053 loss_rpn_loc: 0.103 time: 0.5987 data_time: 0.2790 lr: 0.000060 max_mem: 2178M
[11/04 07:19:19 d2.utils.events]: eta: 0:00:00 iter: 249 total_loss: 0.856 loss_cls: 0.098 loss_box_reg: 0.065 loss_mask: 0.529 loss_rpn_cls: 0.045 loss_rpn_loc: 0.110 time: 0.5979 data_time: 0.2538 lr: 0.000062 max_mem: 2178M
[11/04 07:19:19 d2.engine.hooks]: Overall training speed: 247 iterations in 0:02:28 (0.6003 s / it)
[11/04 07:19:19 d2.engine.hooks]: Total training time: 0:02:30 (0:00:02 on hooks)
train done.
Reusing TensorBoard on port 6006 (pid 561), started 0:14:03 ago. (Use '!kill 561' to kill it.)
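the notebook stops at the loss curves; if you also want COCO-style AP numbers on the val split, a sketch along the following lines should work. this is my own addition, not part of the original notebook; it uses detectron2 0.2.x's cfg-based COCOEvaluator constructor and assumes images/val is annotated the same way as images/train:

# hypothetical evaluation sketch for the registered "fissures_val" split
from detectron2.evaluation import COCOEvaluator, inference_on_dataset
from detectron2.data import build_detection_test_loader

evaluator = COCOEvaluator("fissures_val", cfg, False, output_dir=cfg.OUTPUT_DIR)
val_loader = build_detection_test_loader(cfg, "fissures_val")
print(inference_on_dataset(trainer.model, val_loader, evaluator))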

9.predict images

In [ ]:
from detectron2.utils.visualizer import ColorMode

cfg = get_cfg()
cfg.merge_from_file(model_zoo.get_config_file("COCO-InstanceSegmentation/mask_rcnn_R_50_FPN_3x.yaml"))

cfg.MODEL.ROI_HEADS.NUM_CLASSES = 2  # has two classes (fissure, water). (see https://detectron2.readthedocs.io/tutorials/datasets.html#update-the-config-for-new-datasets)

cfg.OUTPUT_DIR = '/content/drive/My Drive/tunnel_fissure/output'

# cfg.MODEL.WEIGHTS = os.path.join(cfg.OUTPUT_DIR, "model_final.pth")  # path to the model we just trained
cfg.MODEL.WEIGHTS = os.path.join('/content/drive/My Drive/backup/fissure/output/model_0124999.pth')  # a backed-up checkpoint from a longer training run

cfg.MODEL.ROI_HEADS.SCORE_THRESH_TEST = 0.6  # set a custom testing threshold
predictor = DefaultPredictor(cfg)

test_image_folder = '/content/drive/My Drive/tunnel_fissure/images/test'
files = os.listdir(test_image_folder)

# sort by file name
files.sort()

for file_name in files:
    # filter jpg files
    if file_name[-4:] == '.jpg':
        image_path = os.path.join(test_image_folder, file_name)

        # load the origin image
        im = cv2.imread(image_path)

        outputs = predictor(im)  # format is documented at https://detectron2.readthedocs.io/tutorials/models.html#model-output-format
        v = Visualizer(im[:, :, ::-1],
                       metadata=fissures_metadata,
                       scale=0.5,  # zoom out image
                       instance_mode=ColorMode.IMAGE_BW  # remove the colors of unsegmented pixels. This option is only available for segmentation models
        )
        out = v.draw_instance_predictions(outputs["instances"].to("cpu"))
        image_obj = out.get_image()[:, :, ::-1]
        cv2.imwrite(os.path.join(cfg.OUTPUT_DIR, file_name), image_obj)
        cv2_imshow(image_obj)

print('predict done.')

predict done.
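if you want the raw detections instead of rendered images, the outputs["instances"] object exposes them directly. here is a small sketch of my own for tallying the predicted classes of one image; the field names follow the detectron2 model-output format linked in the cell above, and the class order matches thing_classes from step 6:

# hypothetical post-processing: tally predicted classes for a single image
instances = outputs["instances"].to("cpu")
classes = instances.pred_classes.tolist()  # 0 = fissure, 1 = water
scores = instances.scores.tolist()
print("fissures:", classes.count(0), "water regions:", classes.count(1))
print("scores:", [round(s, 2) for s in scores])
# instances.pred_masks holds per-instance boolean masks,
# instances.pred_boxes holds the XYXY boxes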
thanks for watching
feel free to leave a message and tell me which open-source AI project you would like to see unboxed next.

In [ ]:
exit()
