10 Ways to Deploy and Serve AI Models to Make Predictions
Creating a model is just one step in the journey to real-world AI. Deploying means turning your model — whether it’s in a Jupyter Notebook or a .py file somewhere — into a usable service.
Simplified AI pipeline. Created by author.

I won’t discuss containers in-depth, which are separate from deploying and serving models. Container systems, like Docker and Kubernetes (which orchestrates Docker containers), are like boxes for your models that create reproducible, scalable, and isolated environments where you can set up dependencies so the model works in any execution environment.
Here are 10 ways to deploy, host, and serve AI models to make predictions.
1. Flask’s Built-in Server
To preface, Flask’s built-in server isn’t production-grade, as it doesn’t scale. Nonetheless, due to its ease of use, it’s a handy tool for rapid experimentation and quickly getting your model up and running.
To deploy on Flask, follow these general steps:
- Install Flask
- Serialize your AI model (e.g. with Pickle or joblib)
- Create another .py file in the same directory as the serialized model, which will be used to make the web service in Flask
- Run the .py file from the terminal
- Check your localhost address to see if it’s working
- Make an HTTP POST call with input data to receive a prediction
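The steps above can be sketched in a single file. This is a minimal, self-contained illustration: everything is inlined so it runs as-is, but in practice your training script would write model.pkl and a separate app.py would load it. The `ThresholdModel` class is a stand-in for a real trained estimator.

```python
import pickle
from flask import Flask, jsonify, request

class ThresholdModel:
    """Stand-in for a trained estimator; any object with .predict() works,
    so a scikit-learn model serialized the same way drops in directly."""
    def predict(self, rows):
        return [1 if sum(row) > 10 else 0 for row in rows]

# Serialize the model (normally done once, in your training script).
with open("model.pkl", "wb") as f:
    pickle.dump(ThresholdModel(), f)

# The web-service file loads the serialized model at startup.
app = Flask(__name__)
with open("model.pkl", "rb") as f:
    model = pickle.load(f)

@app.route("/predict", methods=["POST"])
def predict():
    # Expects JSON like {"features": [[5, 6], [1, 2]]}
    features = request.get_json()["features"]
    return jsonify({"prediction": model.predict(features)})
```

Running `flask run` then POSTing JSON to `/predict` on localhost returns the prediction; remember this built-in server is for development only.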
This Towards Data Science piece shows an example of deploying a Sklearn model with a Flask API.
2. Flask in the Cloud
Running Flask in the cloud will let you achieve greater scalability, though this is more involved than using the built-in server.
Here are a few cloud servers you can use:
- Deploying Flask on Heroku
- Deploying Flask on Google App Engine
- Deploying Flask on AWS Elastic Beanstalk
- Deploying on Azure (IIS)
- Deploying on PythonAnywhere
This FreeCodeCamp article does a great job of going into detail about deploying your model with Flask in the cloud.
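Most of these platforms expect a production WSGI server rather than Flask’s dev server. Here is a minimal sketch for Heroku, assuming your Flask app object is named `app` inside `app.py` (all file and remote names are illustrative, and it assumes a Heroku remote is already configured):

```shell
# Declare dependencies so the platform installs them on push.
printf "flask\ngunicorn\n" > requirements.txt

# Heroku reads its start command from a one-line Procfile:
# gunicorn serves the "app" object from the "app" module.
echo "web: gunicorn app:app" > Procfile

# Commit and push; Heroku builds and serves the app.
git add requirements.txt Procfile
git commit -m "Add deployment files"
git push heroku main
```

The same `gunicorn app:app` command also works locally, which is a quick way to test the production server before pushing.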
3. AutoML Solutions
AutoML has been exploding in popularity. Though it’s not the same concept as AI deployment, most AutoML solutions allow easy model deployment on top of their core AutoML functionality.
Overlap of AI deployment solutions and AutoML solutions. Created by author.

Here are some AutoML solutions that enable easy (even one-click) model deployment:
- Cloud AutoML (this lets you generate predictions with the provided REST API)
- Azure AutoML (this produces a .pkl file that contains the model, which you can deploy in Azure)
- Apteo (this is a no-code solution that lets you generate predictions in your browser or with an API)
These solutions offer far more than just AI deployment, and can drastically increase the efficiency of a data science team.
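As an illustration of the Cloud AutoML route, each deployed model exposes a `predict` method over REST. This is a hedged sketch: the project, location, and model ID are placeholders, and the exact request payload depends on which AutoML product the model was trained with (the body below follows the AutoML Natural Language text format):

```shell
# Obtain an OAuth token via the gcloud CLI (requires prior authentication).
TOKEN=$(gcloud auth application-default print-access-token)

# Call the model's predict endpoint; project and model IDs are placeholders.
curl -X POST \
  -H "Authorization: Bearer ${TOKEN}" \
  -H "Content-Type: application/json" \
  -d '{"payload": {"textSnippet": {"content": "example input"}}}' \
  "https://automl.googleapis.com/v1/projects/my-project/locations/us-central1/models/my-model-id:predict"
```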
4. Azure Machine Learning
Azure ML focuses on providing enterprise-grade cloud services, though using Microsoft’s servers will obviously cost you.
There are 5 main steps to deploy any model via Azure ML:
- Gather prerequisites (an Azure ML workspace, the Azure CLI, and a trained ML model in the workspace)
- Prepare for deployment by creating an inference configuration
- Create a Docker image using Model.package
- Deploy the image as a web app
- Use the web app by submitting data to the URL and displaying the response
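The middle steps can be sketched with the azureml-core (v1) SDK. This is a sketch under stated assumptions, not a definitive implementation: it assumes you are authenticated, that a model named "my-model" is registered in the workspace, and that `score.py` and `environment.yml` exist (all placeholder names):

```python
# Sketch only: requires azureml-core installed and Azure credentials.
from azureml.core import Environment, Workspace
from azureml.core.model import InferenceConfig, Model

ws = Workspace.from_config()          # reads your workspace config.json
model = Model(ws, name="my-model")    # a model already registered in the workspace

# Inference configuration: the entry script defines init() and run().
env = Environment.from_conda_specification("inference-env", "environment.yml")
inference_config = InferenceConfig(entry_script="score.py", environment=env)

# Package the model into a Docker image, then deploy that image as a web app.
package = Model.package(ws, [model], inference_config)
package.wait_for_creation(show_output=True)
print(package.location)  # address of the built image in your container registry
```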
5. GCP
There are three main steps to deploying on GCP:
- Upload your model to a Cloud Storage bucket.
- Create an AI Platform Prediction model resource.
- Create an AI Platform Prediction version resource, specifying the Cloud Storage path to your saved model.
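The three steps map onto `gsutil` and `gcloud` commands. A sketch with placeholder names (the bucket, model name, region, and framework/runtime versions all depend on your setup):

```shell
# 1. Upload the saved model artifact to a Cloud Storage bucket.
gsutil cp ./model.joblib gs://my-bucket/model-dir/

# 2. Create the AI Platform Prediction model resource.
gcloud ai-platform models create my_model --region=us-central1

# 3. Create a version resource pointing at the Cloud Storage path.
gcloud ai-platform versions create v1 \
  --model=my_model \
  --region=us-central1 \
  --origin=gs://my-bucket/model-dir/ \
  --framework=scikit-learn \
  --runtime-version=2.11 \
  --python-version=3.7
```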
Like Azure, GCP offers enterprise-grade scalability and security, though it requires extensive technical expertise to get running.
6. AWS SageMaker
AWS SageMaker provides an HTTPS endpoint for your model, making it available for inference in three steps:
- Create the model in SageMaker, including the relevant S3 path and Docker registry path
- Create an endpoint configuration for an HTTPS endpoint
- Create an HTTPS endpoint
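These three steps correspond to three boto3 calls. A hedged sketch: all names, ARNs, and the container image URI below are placeholders, and it assumes the model artifact and inference image already exist:

```python
# Sketch only: requires configured AWS credentials and existing artifacts.
import boto3

sm = boto3.client("sagemaker")

# 1. Create the model from an S3 artifact plus an inference container image.
sm.create_model(
    ModelName="my-model",
    PrimaryContainer={
        "Image": "123456789012.dkr.ecr.us-east-1.amazonaws.com/my-image:latest",
        "ModelDataUrl": "s3://my-bucket/model.tar.gz",
    },
    ExecutionRoleArn="arn:aws:iam::123456789012:role/SageMakerRole",
)

# 2. Create an endpoint configuration describing the serving hardware.
sm.create_endpoint_config(
    EndpointConfigName="my-endpoint-config",
    ProductionVariants=[{
        "VariantName": "AllTraffic",
        "ModelName": "my-model",
        "InstanceType": "ml.m5.large",
        "InitialInstanceCount": 1,
    }],
)

# 3. Create the HTTPS endpoint itself; SageMaker provisions and hosts it.
sm.create_endpoint(
    EndpointName="my-endpoint",
    EndpointConfigName="my-endpoint-config",
)
```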
7. IBM Watson ML
While AWS, GCP, and Azure are the three giants when it comes to deploying AI in the cloud, IBM’s Watson ML offers a more niche solution, which allows you to dynamically retrain models and auto-generate APIs.
IBM offers a great white paper on its services, which expands more broadly into ML servers and pipelines.
8. Oracle Data Science Platform
Oracle’s Data Science Platform lets teams build, manage, and deploy models with reproducibility and strong security, within a comprehensive model-building environment.
9. Alibaba Cloud
Alibaba Cloud offers several methods for deploying models via its Elastic Algorithm Service (EAS), including:
- Uploading models to the console
- Using PAI Studio to deploy models
- Using DSW to deploy models
- Using the EASCMD client to deploy models
10. Render
Render is one of the easier-to-use tools on this list, as it deploys your models directly from GitHub or GitLab. All you need to do is push your code like you normally do.
Fast.AI has a useful guide for deploying a model on Render, showing just how simple it is.
Conclusion
There’s an ever-growing range of solutions to deploy and serve AI models.
If your needs are simple, you might stick with Flask, while if you need enterprise-grade scalability (and have enterprise-level expertise and resources), you might go with AWS, Azure, or GCP.
If you have more niche needs, you might choose Oracle, IBM, or Alibaba. If you want an end-to-end solution without a ton of hassle, you might choose an AutoML tool.
What do you think — would you add any deployment tools to this list?
Source: https://towardsdatascience.com/10-ways-to-deploy-and-serve-ai-models-to-make-predictions-336527ef00b2