What?! Someone actually wants to do machine learning on HarmonyOS?!
And why not? I have been training RNN models in Python for sentiment analysis lately, and it occurred to me: is there room to do something like this on HarmonyOS as well? The answer is yes.
In this article we will use Stanford CoreNLP (the Stanford natural language toolkit) to build a simple app that analyses the sentiment of English sentences.
Stanford CoreNLP is the natural language processing toolkit from Stanford University. It provides a set of tools for processing human language, such as tokenization, part-of-speech tagging, named entity recognition, constituency parse trees and dependency parsing. Stanford CoreNLP itself is written in Java, and a complete Python wrapper is also available.
With that, the preparation is done. Next we implement a simple sentence-level sentiment analysis feature.
As mentioned above, StanfordCoreNLP is the entry point. We instantiate a Properties object to declare the annotators (the language-processing steps) we need, build the pipeline from it, and then process the input text into an Annotation object.
package com.example.nlpdemo.utils;
import edu.stanford.nlp.ling.CoreAnnotations;
import edu.stanford.nlp.neural.rnn.RNNCoreAnnotations;
import edu.stanford.nlp.pipeline.Annotation;
import edu.stanford.nlp.pipeline.StanfordCoreNLP;
import edu.stanford.nlp.sentiment.SentimentCoreAnnotations;
import edu.stanford.nlp.trees.Tree;
import edu.stanford.nlp.util.CoreMap;
import java.util.Properties;
public class NLP_EMOTION {
    // Required: the entry point of the CoreNLP pipeline
    StanfordCoreNLP pipeline = null;
    // Bookkeeping only: accumulated sentiment score
    public int score;

    public void startengine() {
        // Instantiate a Properties object
        Properties props = new Properties();
        this.score = 0;
        // Select the annotators (the tools mentioned above): tokenization, sentence splitting, parsing and sentiment
        props.setProperty("annotators", "tokenize, ssplit, parse, sentiment");
        // Build the pipeline
        pipeline = new StanfordCoreNLP(props);
    }

    public int getScore() {
        return score;
    }

    public String sentiment_emotion(String text) {
        int emotion;
        this.score = 0;
        String emotionState;
        String str = "";
        // Run the pipeline on the string we want to analyse
        Annotation annotation = pipeline.process(text);
        int i = 0;
        for (CoreMap sentence : annotation.get(CoreAnnotations.SentencesAnnotation.class)) {
            // Sentiment-annotated parse tree of the sentence
            Tree tree = sentence.get(SentimentCoreAnnotations.SentimentAnnotatedTree.class);
            // Predicted sentiment class: 0 (very negative) .. 4 (very positive)
            emotion = RNNCoreAnnotations.getPredictedClass(tree);
            i++;
            score += emotion;
            // Human-readable sentiment label
            emotionState = sentence.get(SentimentCoreAnnotations.SentimentClass.class);
            str += emotionState + ": " + sentence + " " + emotion + "|";
        }
        // Average over all sentences; guard against input with no sentences
        if (i > 0) {
            score = score / i;
        }
        return str;
    }
}
The internal structure of the functions used here will not be discussed in detail; interested readers can study it on the official site. When the opportunity arises I will also analyse and build an RNN from scratch.
Inside sentiment_emotion we run the sentiment analysis. For every sentence we obtain the human-readable sentiment label, the predicted class (an integer from 0, very negative, to 4, very positive) and the sentiment-annotated parse tree; getScore() then returns the per-sentence average.
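A minimal usage sketch (the sample sentence and the labels in the comments below are purely illustrative; the exact label strings such as "Positive" or "Negative" depend on the model version you ship):
NLP_EMOTION nlp = new NLP_EMOTION();
nlp.startengine();
// detail might look like "Positive: I really love this phone. 3|Negative: The battery dies too fast. 1|"
String detail = nlp.sentiment_emotion("I really love this phone. The battery dies too fast.");
// average predicted class over both sentences, e.g. (3 + 1) / 2 = 2
int avg = nlp.getScore();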
Here we use the hybrid JS + Java development model: the JS FA (Feature Ability) calls a Java PA (Particle Ability) of type Service. Many community members have already shared how to set this up, so it will not be repeated here.
package com.example.nlpdemo;
import com.example.nlpdemo.utils.NLP_EMOTION;
import ohos.aafwk.ability.Ability;
import ohos.aafwk.content.Intent;
import ohos.app.Context;
import ohos.hiviewdfx.HiLog;
import ohos.hiviewdfx.HiLogLabel;
import ohos.rpc.*;
import ohos.utils.zson.ZSONObject;
import java.util.HashMap;
import java.util.Map;
public class NLPServiceAbility extends Ability {
    private static final String TAG = "NLPTest";
    // Log label for HiLog output
    private static final HiLogLabel LABEL = new HiLogLabel(3, 0xD000F00, TAG);
    private Context context;
    private MyRemote remote = new MyRemote();
    private String str = "";
    private IRemoteObject remoteObjectHandler;
    static NLP_EMOTION nlpPipeline = null;
    private int has_new = 0;

    // When an FA requests this PA it calls Ability.connectAbility; once the connection succeeds,
    // onConnect must return a remote object that the FA uses to send messages to the PA
    @Override
    protected IRemoteObject onConnect(Intent intent) {
        super.onConnect(intent);
        return remote.asObject();
    }

    public static String test(String s) {
        String text = s;
        nlpPipeline = new NLP_EMOTION();
        nlpPipeline.startengine();
        String result = nlpPipeline.sentiment_emotion(text);
        // Log the result that was just computed instead of running the analysis a second time
        HiLog.info(LABEL, "yzj" + result);
        return result;
    }

    class MyRemote extends RemoteObject implements IRemoteBroker {
        private static final int SUCCESS = 0;
        private static final int ERROR = 1;
        private static final int PLUS = 1001;
        private static final int SUBSCRIBE = 1005;
        private static final int NLP = 1010;

        MyRemote() {
            super("MyService_MyRemote");
        }

        @Override
        public boolean onRemoteRequest(int code, MessageParcel data, MessageParcel reply, MessageOption option) {
            switch (code) {
                case SUBSCRIBE: {
                    // If only a single FA subscription needs to be supported, simply overwrite the handle
                    remoteObjectHandler = data.readRemoteObject();
                    // startNotify();
                    Map<String, Object> result = new HashMap<String, Object>();
                    result.put("code", SUCCESS);
                    reply.writeString(ZSONObject.toZSONString(result));
                    break;
                }
                case PLUS: {
                    String dataStr = data.readString();
                    // The reply currently only supports String; complex structures can be serialized into a ZSON string
                    Map<String, Object> result = new HashMap<String, Object>();
                    result.put("code", SUCCESS);
                    result.put("abilityResult", "111");
                    reply.writeString(ZSONObject.toZSONString(result));
                    break;
                }
                case NLP: {
                    str = data.readString();
                    // The reply currently only supports String; complex structures can be serialized into a ZSON string
                    HiLog.info(LABEL, str);
                    Map<String, Object> result = new HashMap<String, Object>();
                    result.put("code", SUCCESS);
                    result.put("abilityResult", "NLP handler invoked successfully");
                    result.put("emotion", test(str));
                    result.put("score", nlpPipeline.getScore());
                    str = "";
                    reply.writeString(ZSONObject.toZSONString(result));
                    break;
                }
                default: {
                    Map<String, Object> result = new HashMap<String, Object>();
                    result.put("abilityError", ERROR);
                    reply.writeString(ZSONObject.toZSONString(result));
                    return false;
                }
            }
            return true;
        }

        @Override
        public IRemoteObject asObject() {
            return this;
        }
    }
}
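On the JS side, index.js subscribes to the PA when the page initializes and, when the button is tapped, sends the text from the input box to the PA with message code 1010 (the NLP case above):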
export default {
    data: {
        title: "",
        str: "NONE",
        inputfield: "nothing",
        tips: "none",
        score: "0",
    },
    onInit() {
        this.title = "How are you feeling right now?";
        this.Subscribekv();
        this.NLP();
    },
    // Subscribe to the PA
    initAction: function (code) {
        var actionData = {};
        var action = {};
        action.bundleName = "com.yzj.card";
        action.abilityName = "com.example.nlpdemo.NLPServiceAbility";
        action.messageCode = code;
        action.data = actionData;
        action.abilityType = 0;
        action.syncOption = 0;
        return action;
    },
    Subscribekv: async function () {
        try {
            var action = this.initAction(1005);
            var that = this;
            // Call the subscription API
            var result = await FeatureAbility.subscribeAbilityEvent(action, function (res) {
                console.info("result returned by the subscribed PA: " + res);
                console.info("received a notification from the PA");
                that.onShow();
            });
            console.info("subscribeCommonEvent result = " + result);
        } catch (pluginError) {
            console.error("subscribeCommonEvent error: result = " + JSON.stringify(pluginError));
        }
    },
    NLP: async function () {
        var actionData = this.str;
        var action = {};
        action.bundleName = 'com.yzj.card';
        action.abilityName = 'com.example.nlpdemo.NLPServiceAbility';
        action.messageCode = 1010;
        action.data = actionData;
        action.abilityType = 0;
        action.syncOption = 0;
        var result = await FeatureAbility.callAbility(action);
        var ret = JSON.parse(result);
        if (ret.code == 0) {
            console.info('plus result is: ' + JSON.stringify(ret.abilityResult));
            console.info('NLP result: ' + JSON.stringify(ret.emotion));
            // Put every sentence on its own line (replace all "|" separators, not just the first one)
            var ss = JSON.stringify(ret.emotion).replace(/\|/g, "\n");
            this.inputfield = ss;
            console.info("average emotion: " + JSON.stringify(ret.score));
            let rank = parseInt(JSON.stringify(ret.score));
            this.score = rank;
            if (rank <= 1) {
                this.tips = "Today might be a little rough?";
            } else if (rank == 2) {
                this.tips = "Calm and ordinary is just fine";
            } else if (rank >= 3) {
                this.tips = "Today is full of joy!";
            }
        } else {
            console.error('plus error code: ' + JSON.stringify(ret.code));
        }
    },
    textfield(e) {
        this.str = e.value;
    }
}
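The matching layout (index.hml) provides the title, an input box, the trigger button, and two text areas that display the per-sentence results and the overall tip together with the score: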
<div class="container">
    <text class="title" style="font-size: 32px;">
        {{ title }}
    </text>
    <input id="infield" type="text" style="width: 70%; height: 12%; font-size: 20px; margin-top: 30px;" @change="textfield">
        Please enter some text
    </input>
    <button type="capsule" onclick="NLP" style="width: 150px; height: 60px; margin-top: 30px;">
        Analyze
    </button>
    <text style="width: 312px; height: 200px; background-color: cornflowerblue; margin-top: 30px; border-radius: 25px; font-size: 20px;">
        {{ inputfield }}
    </text>
    <text style="font-size: 20px; width: 80%; height: 10%; background-color: aquamarine; margin-top: 30px; border-radius: 25px;">
        {{ tips }}  Score {{ score }}
    </text>
</div>
There is a lot more that is interesting in machine learning. This is obviously not the best development model: the project balloons to about 5 GB because of the bundled models (ha), and ideally the heavy lifting would be deployed in the cloud. What we get is a working feature rather than a polished one, but it is still a worthwhile experiment. With this toolkit you can build plenty of interesting features, it also supports Chinese and several other languages, and it can be combined with the AI capabilities HarmonyOS already offers. Readers are welcome to try it out and share what they find.
Or perhaps we should attempt something bolder: building a machine learning model from scratch on HarmonyOS / OpenHarmony, and then using the distributed capabilities to pool the compute of N devices? I also wonder how far the Kirin 990 in my hand could take it. (Heh)