Local Interpretable Model-agnostic Explanations – LIME in Python


When working with classification and/or regression techniques, it's always good to be able to ‘explain’ what your model is doing. With Local Interpretable Model-agnostic Explanations (LIME), you can quickly provide visual explanations of your model(s).

It's quite easy to throw numbers or content into an algorithm and get a result that looks good. We can test for accuracy and feel confident that the classifier and/or model is ‘good’… but can we describe to other users what the model is actually doing? A good data scientist spends some of their time making sure they have reasonable explanations for what the model is doing and why the results are what they are.
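The core idea behind LIME can be sketched without the `lime` package itself: perturb samples around the instance you want explained, query the black-box model for predictions, weight the samples by proximity, and fit a simple linear surrogate whose coefficients act as local feature importances. The sketch below is a minimal illustration of that idea using scikit-learn, assuming the iris dataset as a stand-in for your own data; the perturbation scale and proximity kernel are illustrative choices, not the `lime` library's exact defaults.

```python
import numpy as np
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import Ridge

# Train a "black-box" model that we want to explain.
X, y = load_iris(return_X_y=True)
black_box = RandomForestClassifier(n_estimators=50, random_state=0).fit(X, y)

def explain_locally(instance, n_samples=500):
    """LIME-style sketch: perturb around the instance, weight by proximity,
    and fit a weighted linear surrogate to the black-box predictions."""
    rng = np.random.default_rng(0)
    # Gaussian perturbations scaled to each feature's spread (illustrative choice).
    perturbed = instance + rng.normal(scale=X.std(axis=0) * 0.5,
                                      size=(n_samples, X.shape[1]))
    # Black-box probability for one class of interest.
    preds = black_box.predict_proba(perturbed)[:, 1]
    # Exponential proximity kernel: nearby samples count more.
    dists = np.linalg.norm(perturbed - instance, axis=1)
    weights = np.exp(-(dists ** 2) / (2 * dists.std() ** 2))
    # The surrogate's coefficients are the local, per-feature explanation.
    surrogate = Ridge(alpha=1.0).fit(perturbed, preds, sample_weight=weights)
    return surrogate.coef_

coefs = explain_locally(X[60])
print(coefs)  # one local importance value per feature
```

The real `lime` package wraps this loop up (e.g. `LimeTabularExplainer.explain_instance`) and adds the visual output; the sketch is only meant to show what those explanations are made of.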
