Semantic Segmentation of 150 Classes of Objects With 5 Lines of Code

It is now possible to perform segmentation on 150 classes of objects using the ade20k model with PixelLib. The ade20k model is a deeplabv3+ model trained on the ade20k dataset, a dataset with 150 classes of objects. Thanks to TensorFlow DeepLab's model zoo, I extracted the ade20k model from its TensorFlow model checkpoint.

Install the latest version of TensorFlow (TensorFlow 2.0) with:

  • pip3 install tensorflow

Install PixelLib:

  • pip3 install pixellib --upgrade
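
Before moving on, you can optionally confirm that both packages were installed correctly. This is a quick sanity check, not part of PixelLib's workflow:

import tensorflow as tf
import pixellib  # a successful import confirms PixelLib was installed

# PixelLib's ade20k model expects TensorFlow 2.x;
# print the version to confirm the right TensorFlow is active.
print(tf.__version__)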

Implementation of Semantic Segmentation with PixelLib:

The code below implements semantic segmentation with the deeplabv3+ model trained on the ade20k dataset:

import pixellib
from pixellib.semantic import semantic_segmentation


segment_image = semantic_segmentation()
segment_image.load_ade20k_model("deeplabv3_xception65_ade20k.h5")
segment_image.segmentAsAde20k("path_to_image", output_image_name= "path_to_output_image")

We shall examine each line of code:

import pixellib
from pixellib.semantic import semantic_segmentation

segment_image = semantic_segmentation()

The class for performing semantic segmentation is imported from PixelLib, and we created an instance of the class.

segment_image.load_ade20k_model("deeplabv3_xception65_ade20k.h5")

In the code above, we loaded the xception model trained on ade20k for segmenting objects. The model can be downloaded from here.

segment_image.segmentAsAde20k("path_to_image", output_image_name="path_to_output_image")

We called the function to perform segmentation on an image. The function takes two parameters:

  • path_to_image: this is the path to the image to be segmented.

  • output_image_name: this is the path to which the segmented image will be saved. It will be saved in your current working directory.

Sample.jpg

Wikicommons (CC0), by Acabashi

Note: It is possible to perform semantic segmentation of both indoor and outdoor scenes with PixelLib using the ade20k model.

import pixellib
from pixellib.semantic import semantic_segmentation


segment_image = semantic_segmentation()
segment_image.load_ade20k_model("deeplabv3_xception65_ade20k.h5")
segment_image.segmentAsAde20k("sample.jpg", output_image_name="image_new.jpg")

Output image

Semantic segmentation of an outdoor scene

This is the saved image after segmentation; the objects in the image are segmented. You can apply a segmentation overlay on the image if you want to.

segment_image.segmentAsAde20k("sample.jpg", output_image_name = "image_new.jpg", overlay = True)

We added the extra parameter overlay and set it to true, and we obtained an image with a segmentation overlay on the objects.

Sample2.jpg

Wikicommons.com (CC0), by Karen Mardahl

segment_image.segmentAsAde20k("sample2.jpg", output_image_name="image_new2.jpg")

Output image

Semantic segmentation of an indoor scene

Specialised uses of PixelLib may require you to return the array of the segmentation’s output:

  • Obtain the array of the segmentation's output by using this code,

segmap, output = segment_image.segmentAsAde20k()

You can test the code for obtaining arrays and print out the shape of the output by modifying the semantic segmentation code below.

import pixellib
from pixellib.semantic import semantic_segmentation
import cv2


segment_image = semantic_segmentation()
segment_image.load_ade20k_model("deeplabv3_xception65_ade20k.h5")
segmap, output = segment_image.segmentAsAde20k("sample2.jpg")
cv2.imwrite("img.jpg", output)
print(output.shape)
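
The segmap returned above holds the segmentation result as an array. If you want to see which of the 150 ade20k class IDs appear in the image, you can inspect it with NumPy. This is a minimal sketch that assumes segmap is a NumPy array containing one integer class ID per pixel; the exact return format of segmentAsAde20k may differ between PixelLib versions.

import numpy as np

# List the distinct class IDs present in the segmentation map
# (assumes segmap stores one integer ade20k class ID per pixel).
class_ids = np.unique(segmap)
print(class_ids)
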
  • Obtain both the segmap and the segmentation overlay’s arrays by using this code,

segmap, seg_overlay = segment_image.segmentAsAde20k(overlay = True)

import pixellib
from pixellib.semantic import semantic_segmentation
import cv2


segment_image = semantic_segmentation()
segment_image.load_ade20k_model("deeplabv3_xception65_ade20k.h5")
segmap, seg_overlay = segment_image.segmentAsAde20k("sample2.jpg", overlay = True)
cv2.imwrite("img.jpg", seg_overlay)
print(seg_overlay.shape)

VIDEO SEGMENTATION WITH ADE20K MODEL

import pixellib
from pixellib.semantic import semantic_segmentation


segment_video = semantic_segmentation()
segment_video.load_ade20k_model("deeplabv3_xception65_ade20k.h5")
segment_video.process_video_ade20k("video_path", frames_per_second= 15, output_video_name="path_to_output_video")

We shall explain each line of code below.

import pixellib
from pixellib.semantic import semantic_segmentation

segment_video = semantic_segmentation()

We imported the class for performing semantic segmentation and created an instance of the class.

segment_video.load_ade20k_model("deeplabv3_xception65_ade20k.h5")

We loaded the xception model trained on the ade20k dataset to perform semantic segmentation; it can be downloaded from here.

segment_video.process_video_ade20k("video_path",  overlay = True, frames_per_second= 15, output_video_name="path_to_output_video")

We called the function to perform segmentation on the video file.

It takes the following parameters:

  • video_path: this is the path to the video file we want to perform segmentation on.

  • frames_per_second: this is the parameter used to set the number of frames per second for the saved video file. In this case it is set to 15, i.e. the saved video file will have 15 frames per second.

  • output_video_name: this is the name of the saved segmented video. The output video will be saved in your current working directory.

sample_video

(Video demo)

import pixellib
from pixellib.semantic import semantic_segmentation


segment_video = semantic_segmentation()
segment_video.load_ade20k_model("deeplabv3_xception65_ade20k.h5")
segment_video.process_video_ade20k("sample_video.mp4", frames_per_second= 15, output_video_name="output_video.mp4")

Output Video

(Video demo)

This is the saved segmented video using the ade20k model.

Semantic Segmentation of a Live Camera Feed.

We can use the same model to perform semantic segmentation on a camera feed. This can be done by making a few modifications to the code that is used to process a video file.

import pixellib
from pixellib.semantic import semantic_segmentation
import cv2




capture = cv2.VideoCapture(0)


segment_video = semantic_segmentation()
segment_video.load_ade20k_model("deeplabv3_xception65_ade20k.h5")
segment_video.process_camera_ade20k(capture, overlay=True, frames_per_second= 15, output_video_name="output_video.mp4", show_frames= True,
frame_name= "frame", check_fps = True)

import cv2

capture = cv2.VideoCapture(0)

We imported cv2 and included the code to capture the camera's frames.

segment_video.process_camera_ade20k(capture,  overlay = True, frames_per_second= 15, output_video_name="output_video.mp4", show_frames= True,frame_name= "frame", check_fps = True)

In the code for performing segmentation, we replaced the video's filepath with capture, i.e. we are processing a stream of frames captured by the camera instead of a video file. We added extra parameters for the purpose of displaying the camera frames:

  • show_frames: this parameter handles the display of the segmented camera frames; press q to exit the display of frames.

  • frame_name: this is the name given to the displayed camera frame.

  • check_fps: you may want to check the number of frames processed per second; just set the parameter check_fps to true, and it will print out the number of frames per second. In this case it is 30 frames per second.

Awesome! 30 frames per second is great for real-time segmentation of a camera's feed.
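
When you are finished with the camera stream, it is good practice to release the capture and close any display windows. This is standard OpenCV cleanup, not part of PixelLib's API:

# Release the webcam and close any OpenCV display windows
# once segmentation is finished (standard OpenCV cleanup).
capture.release()
cv2.destroyAllWindows()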

A demo showing the output of PixelLib's semantic segmentation on a camera's feed using the ade20k model.

(Video demo)

Good work! It successfully segmented me.

Visit the official GitHub repository of PixelLib.

Visit the official documentation of PixelLib.

Reach me via:

Email: [email protected]

Twitter: @AyoolaOlafenwa

Facebook: Ayoola Olafenwa

Linkedin: Ayoola Olafenwa

If you enjoyed this article, you will love these other articles about PixelLib:

Image Segmentation With 5 Lines of code

Video Segmentation With 5 Lines of code

Translated from: https://towardsdatascience.com/semantic-segmentation-of-150-classes-of-objects-with-5-lines-of-code-7f244fa96b6c
