I wrote about Jieba word segmentation before; the need came up again recently, so I put together a small example to revisit it. The example first reads the corpus from an xlsx file and then segments that text.
A quick recap of the relevant calls:
import jieba
import jieba.analyse

seg_list = jieba.cut(label, cut_all=True)    # full mode
seg_list = jieba.cut(label, cut_all=False)   # accurate mode
seg_list = jieba.cut_for_search(label)       # search-engine mode
seg_list = jieba.cut(label)                  # default (accurate) mode
words = jieba.analyse.textrank(label, topK=50, withWeight=False, allowPOS=('ns', 'n', 'vn', 'v'))  # TextRank keyword extraction
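Since `cut` returns a generator rather than a list, the segments are usually joined for display. A minimal sketch (the sample sentence is just a placeholder, not from the demo project):

```python
import jieba

label = "我来到北京清华大学"   # placeholder sample sentence
print("Full mode:    ", "/".join(jieba.cut(label, cut_all=True)))
print("Accurate mode:", "/".join(jieba.cut(label, cut_all=False)))
print("Search mode:  ", "/".join(jieba.cut_for_search(label)))
```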
import re

# Keep only ASCII letters, digits and CJK characters
rule = re.compile(u"[^a-zA-Z0-9\u4e00-\u9fa5]")
label = rule.sub('', label)

# Strip full-width punctuation; re.escape keeps [, ], ^, - and \ literal inside the character class
punctuation = """!?。"#$%&'()*+-/:;<=>@[\]^_`{|}~⦅⦆「」、、〃》「」『』【】〔〕〖〗〘〙〚〛〜〝〞〟〰〾〿–—‘'‛“”„‟…‧﹏"""
re_punctuation = "[{}]+".format(re.escape(punctuation))
label = re.sub(re_punctuation, "", label).strip()
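For reuse, the cleaning step can be wrapped into a small helper. A minimal sketch (the function name `clean_text` is my own, not from the demo project):

```python
import re

def clean_text(label: str) -> str:
    """Keep only ASCII letters, digits and CJK characters."""
    return re.sub(r"[^a-zA-Z0-9\u4e00-\u9fa5]", "", label).strip()

print(clean_text("Hello, 世界!2024 年~"))   # -> Hello世界2024年
```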
import xlrd

workbook = xlrd.open_workbook(path)    # path to the .xlsx corpus file
sheet = workbook.sheet_by_index(page)  # select a sheet by index
print("Sheet name:", sheet.name, ", rows:", sheet.nrows, ", cols:", sheet.ncols)
sheet.cell_value(rown, coln)           # read the cell at row rown, column coln
s = [x.strip() for x in item_arr]      # strip surrounding whitespace from every item
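Pulling the xlrd calls together, loading the corpus might look roughly like this (a sketch with placeholder arguments; note that xlrd versions after 1.2 no longer read .xlsx files, so an older xlrd or a different reader is assumed):

```python
import xlrd

def load_corpus(path, page=0, coln=0):
    """Read one column of a sheet into a list of stripped, non-empty strings."""
    workbook = xlrd.open_workbook(path)
    sheet = workbook.sheet_by_index(page)
    print("Sheet name:", sheet.name, ", rows:", sheet.nrows, ", cols:", sheet.ncols)
    item_arr = [str(sheet.cell_value(rown, coln)) for rown in range(sheet.nrows)]
    return [x.strip() for x in item_arr if x.strip()]

# corpus = load_corpus("corpus.xlsx")   # placeholder file name
```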
https://github.com/ShaShiDiZhuanLan/Demo_Jieba_Python
To show screenshots in the README, first upload the images to the repository; once they are committed, copy each image's URL from the project (in Markdown an image is embedded as `![alt text](image-url)`).
Writing the README in Markdown:
# 6. Running Results
