Build a Background Eraser App with Huawei ML Kit Image Segmentation

Image segmentation is a widely used technique in the image processing and computer vision world. It is the process of partitioning a digital image into multiple segments. The expected output of applying image segmentation to an image is a labeling of each pixel into subgroups that we define. By applying image segmentation we get a more meaningful image: one in which each pixel is assigned to a category such as human body, sky, plant, or food.

Image segmentation can be used in many different scenarios. It can be used in photography apps to change the background or to apply an effect only to plants or the human body. It can also be used to detect cancerous cells in a microscope image, extract land-usage information from a satellite image, or determine the amount of herbicide that needs to be sprayed on a field according to crop density.

Huawei ML Kit's Image Segmentation service segments elements of the same type (such as human body, plant, and sky) from an image. The supported elements include human body, sky, plant, food, cat, dog, flower, water, sand, building, mountain, and others. By the way, Huawei ML Kit works on all Android phones with ARM architecture, and since it is a device-side capability it is free.

ML Kit Image Segmentation offers developers two types of segmentation: human body and multiclass. If we select human body segmentation, we can apply it to both static images and video streams. Multiclass segmentation, however, can only be applied to static images.

In multiclass image segmentation the return value is the coordinate array of each element. For instance, if an image consists of a human body, sky, plants, and a cat, the return value is the coordinate arrays of those four elements. From there our app can apply different effects to the elements; for example, we can replace the blue sky with a red one.
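
To make this concrete, here is a minimal, self-contained Kotlin sketch of the kind of post-processing an app could do with such a result. The flat `IntArray` image, the `ByteArray` mask, and the category codes below are invented for illustration only; they are not the actual ML Kit return types or constants.

```kotlin
// Hypothetical per-pixel category codes (NOT the real ML Kit constants).
const val CAT_PLANT: Byte = 1
const val CAT_SKY: Byte = 2

// Recolor every pixel that belongs to the given category, leaving the rest untouched.
// `pixels` is a flat ARGB image; `mask` labels each pixel with a category code.
fun recolorCategory(pixels: IntArray, mask: ByteArray, category: Byte, newColor: Int): IntArray {
    require(pixels.size == mask.size) { "mask must label every pixel" }
    return IntArray(pixels.size) { i -> if (mask[i] == category) newColor else pixels[i] }
}

fun main() {
    val blue = 0xFF0000FF.toInt()   // sky pixels
    val green = 0xFF00FF00.toInt()  // plant pixels
    val red = 0xFFFF0000.toInt()
    val pixels = intArrayOf(blue, blue, green, green)
    val mask = byteArrayOf(CAT_SKY, CAT_SKY, CAT_PLANT, CAT_PLANT)
    val recolored = recolorCategory(pixels, mask, CAT_SKY, red)
    // The two sky pixels are now red; the plant pixels are unchanged.
    println(recolored.contentEquals(intArrayOf(red, red, green, green)))  // true
}
```

The real service hands back coordinates per element rather than a flat mask, but the principle is the same: decide per pixel which element it belongs to, then transform only those pixels.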

The return values of human body segmentation include the coordinate array of the human body, a human body image with a transparent background, and a gray-scale image with a white human body on a black background. Our app can further process the elements based on these return values, for example replacing a video background or cutting the human body out of an image.
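
The background-replacement idea behind these return values can be sketched in plain Kotlin: draw the transparent-background foreground over the new background, keeping the background wherever the foreground pixel is fully transparent. The flat ARGB `IntArray` representation here is an illustration (on Android, `Canvas.drawBitmap` performs this composition for us), and the binary alpha test is a simplification of real alpha blending.

```kotlin
// Composite a cut-out foreground over a new background.
// Each Int is an ARGB pixel; a zero alpha byte means "transparent, keep the background".
fun composite(foreground: IntArray, background: IntArray): IntArray {
    require(foreground.size == background.size) { "images must have the same size" }
    return IntArray(background.size) { i ->
        val alpha = (foreground[i] ushr 24) and 0xFF
        if (alpha == 0) background[i] else foreground[i]  // binary alpha: opaque wins
    }
}

fun main() {
    val transparent = 0x00000000
    val skinTone = 0xFFAA8866.toInt()  // pretend this pixel belongs to the body
    val sky = 0xFF87CEEB.toInt()
    val foreground = intArrayOf(transparent, skinTone)
    val newBackground = intArrayOf(sky, sky)
    val result = composite(foreground, newBackground)
    println(result.contentEquals(intArrayOf(sky, skinTone)))  // true
}
```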

Today we are going to build a Background Eraser app in Kotlin with Android Studio, using the human body segmentation function of ML Kit. In this example we will first learn how to integrate HMS ML Kit into our project, and then see how simple it is to apply segmentation to images.

The application will have three ImageViews: one for the background image, one for the selected image (expected to contain a human body), and one for the processed image. We will extract the human body (or bodies) from the selected image and replace its background with the background we selected. It will be really simple, just enough to see how to use this function in an application; I won't bother you much with details. After we develop this simple application, you can go ahead and add various capabilities to your own app.

Let’s start building our demo application step by step from scratch!

1. Firstly, let's create our project in Android Studio. I named my project Background Eraser; the name is totally up to you. We can create the project by selecting the Empty Activity option and then follow the steps described in this post to create and sign our project in AppGallery Connect.

2. Secondly, in HUAWEI Developer AppGallery Connect, go to Develop > Manage APIs and make sure ML Kit is activated.

3. Now we have integrated Huawei Mobile Services (HMS) into our project. Let's follow the documentation on developer.huawei.com to find the packages to add to our project. On the website click Developer / HMS Core / AI / ML Kit. There you will find introductory information about the services, references, SDKs to download, and more. Under the ML Kit tab follow Android / Getting Started / Integrating HMS Core SDK / Adding Build Dependencies / Integrating the Image Segmentation SDK. We can follow the guide there to add the image segmentation capability to our project. As we are not going to use multiclass segmentation in this project, we only add the base SDK and the human body segmentation package. We also have one meta-data tag to add to our AndroidManifest.xml file. After the integration your app-level build.gradle file will look like this.

apply plugin: 'com.android.application'
apply plugin: 'kotlin-android'
apply plugin: 'kotlin-android-extensions'
apply plugin: 'com.huawei.agconnect'


android {
    compileSdkVersion 30
    buildToolsVersion "30.0.1"


    defaultConfig {
        applicationId "com.demo.backgrounderaser"
        minSdkVersion 23
        targetSdkVersion 30
        versionCode 1
        versionName "1.0"


        testInstrumentationRunner "androidx.test.runner.AndroidJUnitRunner"
    }


    buildTypes {
        release {
            minifyEnabled false
            proguardFiles getDefaultProguardFile('proguard-android-optimize.txt'), 'proguard-rules.pro'
        }
    }
}


dependencies {
    implementation fileTree(dir: "libs", include: ["*.jar"])
    implementation "org.jetbrains.kotlin:kotlin-stdlib:$kotlin_version"
    implementation 'androidx.core:core-ktx:1.3.1'
    implementation 'androidx.appcompat:appcompat:1.2.0'
    implementation 'androidx.constraintlayout:constraintlayout:1.1.3'
    testImplementation 'junit:junit:4.12'
    androidTestImplementation 'androidx.test.ext:junit:1.1.1'
    androidTestImplementation 'androidx.test.espresso:espresso-core:3.2.0'


    //AGC Core
    implementation 'com.huawei.agconnect:agconnect-core:1.3.1.300'


    //Image Segmentation Base SDK
    implementation 'com.huawei.hms:ml-computer-vision-segmentation:2.0.2.300'


    //Image Segmentation Human Body Model
    implementation 'com.huawei.hms:ml-computer-vision-image-segmentation-body-model:2.0.2.300'
}

And your project-level build.gradle file will look like this.

buildscript {
    ext.kotlin_version = "1.4.0"
    repositories {
        google()
        jcenter()
        maven {url 'https://developer.huawei.com/repo/'}
    }
    dependencies {
        classpath "com.android.tools.build:gradle:4.0.1"
        classpath "org.jetbrains.kotlin:kotlin-gradle-plugin:$kotlin_version"
        classpath 'com.huawei.agconnect:agcp:1.3.1.300'
    }
}


allprojects {
    repositories {
        google()
        jcenter()
        maven {url 'https://developer.huawei.com/repo/'}
    }
}


task clean(type: Delete) {
    delete rootProject.buildDir
}

Don't forget to add the following meta-data tag inside the <application> element of your AndroidManifest.xml. It tells HMS Core to automatically download and update the image segmentation model, as described in the ML Kit integration guide.

<application ...>

    <meta-data
        android:name="com.huawei.hms.ml.DEPENDENCY"
        android:value="imgseg" />

</application>

4. We can apply human body segmentation to static images or video streams. In this project we are going to see how to do it on static images. To apply segmentation to images we first need to create our analyzer. The setExact() method determines whether fine segmentation is applied; I chose true here. As the analyzer type we chose BODY_SEG; the other option, IMAGE_SEG, is for multiclass segmentation. The setScene() method determines the result type. Here we chose FOREGROUND_ONLY, meaning a human body image with a transparent background and the original image will be returned. Here is how to create it.

private lateinit var mAnalyzer: MLImageSegmentationAnalyzer


override fun onCreate(savedInstanceState: Bundle?) {
    super.onCreate(savedInstanceState)
    setContentView(R.layout.activity_main)
    
    createAnalyzer()
}


private fun createAnalyzer(): MLImageSegmentationAnalyzer {
    val analyzerSetting = MLImageSegmentationSetting.Factory()
        .setExact(true)
        .setAnalyzerType(MLImageSegmentationSetting.BODY_SEG)
        .setScene(MLImageSegmentationScene.FOREGROUND_ONLY)
        .create()


    return MLAnalyzerFactory.getInstance().getImageSegmentationAnalyzer(analyzerSetting).also {
        mAnalyzer = it
    }
}

5. Create a simple layout. It should contain three ImageViews and two Buttons: one ImageView for the background image, one for the selected image (the one containing a human body) to apply segmentation to, and one for displaying the processed image; one Button for selecting the background image and the other for selecting the image to be processed. Here is my example; you can reach the same result with a different design, too, as long as the view IDs match the ones used in the code below.

<?xml version="1.0" encoding="utf-8"?>
<LinearLayout xmlns:android="http://schemas.android.com/apk/res/android"
    android:layout_width="match_parent"
    android:layout_height="match_parent"
    android:orientation="vertical">

    <ImageView
        android:id="@+id/ivBackgroundFill"
        android:layout_width="match_parent"
        android:layout_height="0dp"
        android:layout_weight="1" />

    <ImageView
        android:id="@+id/ivSelectedBitmap"
        android:layout_width="match_parent"
        android:layout_height="0dp"
        android:layout_weight="1" />

    <ImageView
        android:id="@+id/ivProcessedBitmap"
        android:layout_width="match_parent"
        android:layout_height="0dp"
        android:layout_weight="1" />

    <Button
        android:id="@+id/buttonSelectBackground"
        android:layout_width="match_parent"
        android:layout_height="wrap_content"
        android:text="Select Background" />

    <Button
        android:id="@+id/buttonPickImage"
        android:layout_width="match_parent"
        android:layout_height="wrap_content"
        android:text="Pick Image" />

</LinearLayout>

6. In order to get our background image and source image we can use intents. The OnClickListeners of our buttons will help us get images via intents from device storage, as shown below. In onActivityResult we assign the bitmaps we get to the corresponding variables: mBackgroundFill and mSelectedBitmap. The WRITE_EXTERNAL_STORAGE permission is requested here so that Huawei ML Kit can automatically update its image segmentation model.

class MainActivity : AppCompatActivity() {


    companion object {
        private const val IMAGE_REQUEST_CODE = 58
        private const val BACKGROUND_REQUEST_CODE = 32
    }


    private var mBackgroundFill: Bitmap? = null
    private var mSelectedBitmap: Bitmap? = null


    override fun onCreate(savedInstanceState: Bundle?) {
        super.onCreate(savedInstanceState)
        setContentView(R.layout.activity_main)


        if(ContextCompat.checkSelfPermission(this, Manifest.permission.WRITE_EXTERNAL_STORAGE) == PackageManager.PERMISSION_GRANTED)
            init()
        else
            ActivityCompat.requestPermissions(this, arrayOf(Manifest.permission.WRITE_EXTERNAL_STORAGE), 0)
    }


    override fun onRequestPermissionsResult(requestCode: Int, permissions: Array<out String>, grantResults: IntArray) {
        super.onRequestPermissionsResult(requestCode, permissions, grantResults)


        if (requestCode == 0 && grantResults.isNotEmpty() && grantResults[0] == PackageManager.PERMISSION_GRANTED)
            init()
    }


    private fun init() {
        initView()
        createAnalyzer()
    }


    private fun initView() {
        buttonPickImage.setOnClickListener { getImage(IMAGE_REQUEST_CODE) }
        buttonSelectBackground.setOnClickListener { getImage(BACKGROUND_REQUEST_CODE) }
    }
    
    private fun getImage(requestCode: Int) {
        Intent(Intent.ACTION_GET_CONTENT).also {
            it.type = "image/*"
            startActivityForResult(it, requestCode)
        }
    }


    override fun onActivityResult(requestCode: Int, resultCode: Int, data: Intent?) {
        super.onActivityResult(requestCode, resultCode, data)


        if (resultCode == Activity.RESULT_OK) {
            data?.data?.also {


                when(requestCode) {
                    IMAGE_REQUEST_CODE -> {
                        mSelectedBitmap = MediaStore.Images.Media.getBitmap(contentResolver, it)


                        if (mSelectedBitmap != null) {
                            ivSelectedBitmap.setImageBitmap(mSelectedBitmap)
                            analyse(mSelectedBitmap!!)
                        }
                    }
                    BACKGROUND_REQUEST_CODE -> {
                        mBackgroundFill = MediaStore.Images.Media.getBitmap(contentResolver, it)


                        if (mBackgroundFill != null) {
                            ivBackgroundFill.setImageBitmap(mBackgroundFill)
                            mSelectedBitmap?.let { analyse(it) }
                        }
                    }
                }




            }
        }


    }
}

7. Now let's create our method for analyzing images. It is very simple: it takes a bitmap as a parameter, creates an MLFrame object out of it, and analyzes that MLFrame asynchronously. It has OnSuccess and OnFailure callbacks. If the analysis succeeds, we will try to add the selected background. Remember that we chose FOREGROUND_ONLY as our analyzer's result type, so expect it to return the original image together with the human body image on a transparent background.

private fun analyse(bitmap: Bitmap) {
    val mlFrame = MLFrame.fromBitmap(bitmap)
    mAnalyzer.asyncAnalyseFrame(mlFrame)
        .addOnSuccessListener {
            addSelectedBackground(it)
        }
        .addOnFailureListener {
           Log.e(TAG, "analyse -> asyncAnalyseFrame: ", it)
        }
}

8. To draw the human body onto our background image, we first make sure a background image is selected, then obtain a mutable bitmap from the background to work on, resize that mutable bitmap to match the selected image so the result looks realistic, create a Canvas from the mutable bitmap, draw the human body foreground onto the canvas, and finally use the processed image. Here is our addSelectedBackground() method.

private fun addSelectedBackground(mlImageSegmentation: MLImageSegmentation) {
    if (mBackgroundFill == null) {
        Toast.makeText(applicationContext, "Please select a background image!", Toast.LENGTH_SHORT).show()
    } else {
        var mutableBitmap = if (mBackgroundFill!!.isMutable) {
            mBackgroundFill
        } else {
            mBackgroundFill!!.copy(Bitmap.Config.ARGB_8888, true)
        }


        if (mutableBitmap != null) {


           /*
            *  If background image size is different than our selected image,
            *  we change our background image's size according to selected image.
            */
            if (mutableBitmap.width != mlImageSegmentation.original.width ||
                    mutableBitmap.height != mlImageSegmentation.original.height) {
                mutableBitmap = Bitmap.createScaledBitmap(
                    mutableBitmap,
                    mlImageSegmentation.original.width,
                    mlImageSegmentation.original.height,
                    false)
            }


            val canvas = mutableBitmap?.let { Canvas(it) }
            canvas?.drawBitmap(mlImageSegmentation.foreground, 0F, 0F, null)
            mProcessedBitmap = mutableBitmap
            ivProcessedBitmap.setImageBitmap(mProcessedBitmap)
        }
    }
}

9. Here is the whole MainActivity. You can find this project on GitHub, too.

package com.demo.backgrounderaser


import android.Manifest
import android.app.Activity
import android.content.Intent
import android.content.pm.PackageManager
import android.graphics.Bitmap
import android.graphics.BitmapFactory
import android.graphics.Canvas
import android.os.Bundle
import android.provider.MediaStore
import android.util.Log
import android.widget.Toast
import androidx.appcompat.app.AppCompatActivity
import androidx.core.app.ActivityCompat
import androidx.core.content.ContextCompat
import com.huawei.hms.mlsdk.MLAnalyzerFactory
import com.huawei.hms.mlsdk.common.MLFrame
import com.huawei.hms.mlsdk.imgseg.MLImageSegmentation
import com.huawei.hms.mlsdk.imgseg.MLImageSegmentationAnalyzer
import com.huawei.hms.mlsdk.imgseg.MLImageSegmentationScene
import com.huawei.hms.mlsdk.imgseg.MLImageSegmentationSetting
import kotlinx.android.synthetic.main.activity_main.*




class MainActivity : AppCompatActivity() {


    companion object {
        private const val TAG = "BE_MainActivity"
        private const val IMAGE_REQUEST_CODE = 58
        private const val BACKGROUND_REQUEST_CODE = 32
    }


    private lateinit var mAnalyzer: MLImageSegmentationAnalyzer


    private var mBackgroundFill: Bitmap? = null
    private var mSelectedBitmap: Bitmap? = null
    private var mProcessedBitmap: Bitmap? = null


    override fun onCreate(savedInstanceState: Bundle?) {
        super.onCreate(savedInstanceState)
        setContentView(R.layout.activity_main)


        if(ContextCompat.checkSelfPermission(this, Manifest.permission.WRITE_EXTERNAL_STORAGE) == PackageManager.PERMISSION_GRANTED)
            init()
        else
            ActivityCompat.requestPermissions(this, arrayOf(Manifest.permission.WRITE_EXTERNAL_STORAGE), 0)
    }


    override fun onRequestPermissionsResult(requestCode: Int, permissions: Array<out String>, grantResults: IntArray) {
        super.onRequestPermissionsResult(requestCode, permissions, grantResults)


        if (requestCode == 0 && grantResults.isNotEmpty() && grantResults[0] == PackageManager.PERMISSION_GRANTED)
            init()
    }


    private fun init() {
        initView()
        createAnalyzer()
    }


    private fun initView() {
        buttonPickImage.setOnClickListener { getImage(IMAGE_REQUEST_CODE) }
        buttonSelectBackground.setOnClickListener { getImage(BACKGROUND_REQUEST_CODE) }
    }


    private fun createAnalyzer(): MLImageSegmentationAnalyzer {
        val analyzerSetting = MLImageSegmentationSetting.Factory()
            .setExact(true)
            .setAnalyzerType(MLImageSegmentationSetting.BODY_SEG)
            .setScene(MLImageSegmentationScene.FOREGROUND_ONLY)
            .create()


        return MLAnalyzerFactory.getInstance().getImageSegmentationAnalyzer(analyzerSetting).also {
            mAnalyzer = it
        }
    }


    private fun analyse(bitmap: Bitmap) {
        val mlFrame = MLFrame.fromBitmap(bitmap)
        mAnalyzer.asyncAnalyseFrame(mlFrame)
            .addOnSuccessListener {
                addSelectedBackground(it)
            }
            .addOnFailureListener {
                Log.e(TAG, "analyse -> asyncAnalyseFrame: ", it)
            }
    }


    private fun addSelectedBackground(mlImageSegmentation: MLImageSegmentation) {
        if (mBackgroundFill == null) {
            Toast.makeText(applicationContext, "Please select a background image!", Toast.LENGTH_SHORT).show()
        } else {


            var mutableBitmap = if (mBackgroundFill!!.isMutable) {
                mBackgroundFill
            } else {
                mBackgroundFill!!.copy(Bitmap.Config.ARGB_8888, true)
            }


            if (mutableBitmap != null) {


                /*
                 *  If background image size is different than our selected image,
                 *  we change our background image's size according to selected image.
                 */
                if (mutableBitmap.width != mlImageSegmentation.original.width ||
                        mutableBitmap.height != mlImageSegmentation.original.height) {
                    mutableBitmap = Bitmap.createScaledBitmap(
                        mutableBitmap,
                        mlImageSegmentation.original.width,
                        mlImageSegmentation.original.height,
                        false)
                }


                val canvas = mutableBitmap?.let { Canvas(it) }
                canvas?.drawBitmap(mlImageSegmentation.foreground, 0F, 0F, null)
                mProcessedBitmap = mutableBitmap
                ivProcessedBitmap.setImageBitmap(mProcessedBitmap)
            }
        }
    }


    private fun getImage(requestCode: Int) {
        Intent(Intent.ACTION_GET_CONTENT).also {
            it.type = "image/*"
            startActivityForResult(it, requestCode)
        }
    }


    override fun onActivityResult(requestCode: Int, resultCode: Int, data: Intent?) {
        super.onActivityResult(requestCode, resultCode, data)


        if (resultCode == Activity.RESULT_OK) {
            data?.data?.also {


                when(requestCode) {
                    IMAGE_REQUEST_CODE -> {
                        mSelectedBitmap = MediaStore.Images.Media.getBitmap(contentResolver, it)


                        if (mSelectedBitmap != null) {
                            ivSelectedBitmap.setImageBitmap(mSelectedBitmap)
                            analyse(mSelectedBitmap!!)
                        }
                    }
                    BACKGROUND_REQUEST_CODE -> {
                        mBackgroundFill = MediaStore.Images.Media.getBitmap(contentResolver, it)


                        if (mBackgroundFill != null) {
                            ivBackgroundFill.setImageBitmap(mBackgroundFill)
                            mSelectedBitmap?.let { analyse(it) }
                        }
                    }
                }




            }
        }


    }
}

10. Well done! We have finished all the steps and created our project. Now we can test it. Here are some examples.

11. We have created a simple Background Eraser app. We can extract the human body pixels from our image and apply different backgrounds behind them. This project shows you the basics; you can go further, create much better projects, and come up with various ideas to apply image segmentation to. I hope you enjoyed this article and created your project easily. If you have any questions, please ask through the link below.

Happy coding!

Translated from: https://medium.com/huawei-developers/build-a-background-eraser-app-with-huawei-ml-kit-image-segmentation-52208a7471ee
