1. Given an arbitrary list, find whether an element x is in it: return 1 if it is present and 0 if it is not (a minimal sketch follows this list).
2. Given any plain-text file in English, count how many times each word occurs.
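For problem 1, Python's in operator already does the membership test; a minimal sketch (find_x and the sample values are illustrative names, not from the original post):

def find_x(lst, x):
    # 'in' scans the list and is True if x occurs anywhere in it
    return 1 if x in lst else 0

print(find_x([3, 5, 7], 5))  # prints 1
print(find_x([3, 5, 7], 4))  # prints 0

The rest of this post works through problem 2.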
s = """ Accessing Text from the Web and from Disk
Electronic Books
A small sample of texts from Project Gutenberg appears in the NLTK corpus collection. However, you may be interested in analyzing other texts from Project Gutenberg. You can browse the catalog of 25,000 free online books at http://www.gutenberg.org/catalog/, and obtain a URL to an ASCII text file. Although 90% of the texts in Project Gutenberg are in English, it includes material in over 50 other languages, including Catalan, Chinese, Dutch, Finnish, French, German, Italian, """
word = []  # holds the words
# In the original text every word is separated by a space, or by a comma plus a space,
line = s.replace(',', '')  # so drop the commas and let spaces alone separate the words
line = line.strip()  # bug fix: the original wrote s.strip(), which threw away the comma removal
wo = line.split(' ')
word.extend(wo)
# Remove the duplicate values:
wokey = {}
# the words become the keys; the values start out as None
wokey = wokey.fromkeys(word)
word_1 = list(wokey.keys())
# then count how many times each word occurs and store the counts in the dictionary
for i in word_1:
    wokey[i] = word.count(i)
print(wokey)
# Sort by value; to sort by key instead, use sorted(wokey.items(), key=lambda d: d[0], reverse=True)
wokey_1 = {}
wokey_1 = sorted(wokey.items(), key=lambda d: d[1], reverse=True)
wokey_1 = dict(wokey_1)
for x, y in wokey_1.items():
    print('the word is "{}" and its amount is "{}"'.format(x, y))
This is a 2016 Alibaba interview question:
Count the occurrences of each word in an English article and print the ten words that occur most often.
The text is as follows:
Accessing Text from the Web and from Disk
Electronic Books
A small sample of texts from Project Gutenberg appears in the NLTK corpus collection. However, you may be interested in analyzing other texts from Project Gutenberg. You can browse the catalog of 25,000 free online books at http://www.gutenberg.org/catalog/, and obtain a URL to an ASCII text file. Although 90% of the texts in Project Gutenberg are in English, it includes material in over 50 other languages, including Catalan, Chinese, Dutch, Finnish, French, German, Italian,
A quick look at the article shows that words are basically separated by a single space, or by a comma followed by a space; both delimiters can also be handled in one pass with a regular expression, as sketched below.
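A minimal regex-based split, assuming a shortened version of the sample text (this is an alternative, not part of the original solution):

import re

s = "Accessing Text from the Web and from Disk, Electronic Books"
# split on commas and/or runs of whitespace in a single pass, dropping empty pieces
words = [w for w in re.split(r'[,\s]+', s) if w]
print(words)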
Method 1
1. Open the file and do a first pass over it
def read_file():
    word = []  # collects every word in the file
    # open the file with a context manager so it is closed when we are done
    # (the original left the file open)
    with open('F:\\Python\\testfile\\test_3.txt') as f:
        for line in f.readlines():
            line = line.replace(',', '')  # drop the commas
            line = line.strip()
            wo = line.split(' ')
            word.extend(wo)
    return word
2. Simple cleanup and counting (a faster single-pass variant is sketched after this step)
def clear_account(lists):
    # fromkeys de-duplicates: every distinct word becomes a key with value None
    wokey = {}
    wokey = wokey.fromkeys(lists)
    word_1 = list(wokey.keys())
    # count each distinct word in the original list
    for i in word_1:
        wokey[i] = lists.count(i)
    return wokey
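Note that lists.count(i) walks the whole list once per distinct word, so the loop above is quadratic in the number of words. A minimal single-pass variant with the same input and output shape (the name clear_account_fast is mine, not from the original post):

def clear_account_fast(lists):
    # one pass over the words, incrementing a per-word counter
    wokey = {}
    for w in lists:
        wokey[w] = wokey.get(w, 0) + 1
    return wokey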
3. Sort the words (since we stored them in a dictionary, this really means sorting the dictionary)
def sort_1(wokey):
    # delete the '' key: splitting on spaces leaves empty strings in the dictionary
    if '' in wokey:
        del wokey['']
    # sort by value; to sort by key instead, use sorted(wokey.items(), key=lambda d: d[0], reverse=True)
    wokey_1 = {}
    wokey_1 = sorted(wokey.items(), key=lambda d: d[1], reverse=True)
    # sorted() returns a list of tuples, so convert it back to a dictionary (optional)
    wokey_1 = dict(wokey_1)
    return wokey_1
4. Print the result
def main(wokey_1):
    # the dictionary is already sorted by count, so the first ten items are the top ten
    i = 0
    for x, y in wokey_1.items():
        if i < 10:
            print('the word is "{}" and its amount is "{}"'.format(x, y))
            i += 1
            continue
        else:
            break
5. Run the program to get the result
main(sort_1(clear_account(read_file())))
Run output (screenshot omitted).
That is the whole program; the complete code is below:
def read_file():
    word = []  # collects every word in the file
    with open('F:\\Python\\testfile\\test_3.txt') as f:
        for line in f.readlines():
            line = line.replace(',', '')
            line = line.strip()
            wo = line.split(' ')
            word.extend(wo)
    return word

def clear_account(lists):
    wokey = {}
    wokey = wokey.fromkeys(lists)
    word_1 = list(wokey.keys())
    for i in word_1:
        wokey[i] = lists.count(i)
    return wokey

def sort_1(wokey):
    if '' in wokey:
        del wokey['']
    wokey_1 = {}
    wokey_1 = sorted(wokey.items(), key=lambda d: d[1], reverse=True)
    wokey_1 = dict(wokey_1)
    return wokey_1

def main(wokey_1):
    i = 0
    for x, y in wokey_1.items():
        if i < 10:
            print('the word is "{}" and its amount is "{}"'.format(x, y))
            i += 1
            continue
        else:
            break

main(sort_1(clear_account(read_file())))
Note, though, that sorting the dictionary is the important step.
The code above shows only one way to sort a dictionary; there are others, for example:
a = {'d': 2, 'f': 4, 'm': 3}

b = a.keys()
c = a.values()
d = zip(b, c)
# zip(keys, values) yields (key, value) tuples, so this sorts by key, ascending
print(sorted(d, reverse=False))
There are more approaches for you to explore on your own; one standard-library option, collections.Counter, is sketched below.
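A minimal sketch with collections.Counter, which does the counting and the top-10 selection in one step (it reuses the same hypothetical file path as Method 1):

from collections import Counter

def top_words(path, n=10):
    # read the file, drop the commas, and split on whitespace
    with open(path) as f:
        words = f.read().replace(',', '').split()
    # Counter counts the words; most_common(n) returns the n most frequent
    return Counter(words).most_common(n)

for word, count in top_words('F:\\Python\\testfile\\test_3.txt'):
    print('the word is "{}" and its amount is "{}"'.format(word, count))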
Method 2
The following approach counts the words with Python on Spark. It is not recommended for small inputs like this one (Spark is a cluster-computing platform for big data; if you do not work with big data, don't spend time learning it, since you would also have to install Spark and Hadoop and configure the environment).
import sys
from pyspark import SparkConf, SparkContext

try:
    conf = SparkConf().setMaster("local").setAppName("My App")
    sc = SparkContext(conf=conf)
    print("connect success")
except Exception as e:
    print("error", e)
    sys.exit(1)

try:
    lines = sc.textFile("F:/python/testfile/test_3_1.txt")
    # split every line into words
    words = lines.flatMap(lambda x: x.split(" "))
    print(words.collect())
    # classic word count: map each word to (word, 1), then sum the counts per word
    result = words.map(lambda x: (x, 1)).reduceByKey(lambda x, y: x + y)
    print(result.collect())
    # pull the pairs into a plain dictionary on the driver
    dict_1 = {}
    for word in result.collect():
        dict_1[word[0]] = word[1]
    print(dict_1)
    # sort by count, descending
    list_2 = sorted(dict_1.items(), key=lambda d: d[1], reverse=True)
    print(list_2)

    i = 0
    for x in list_2:
        if i < 10:
            print("word: ", x[0], " times: ", x[1])
            i += 1
            continue
        else:
            break
except Exception as e:
    print(e)
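One design note: collecting every (word, count) pair back to the driver and sorting in plain Python gives up Spark's parallelism. Given the same result RDD as above, Spark can return the top ten directly; a sketch using RDD.takeOrdered:

# let Spark pick the ten most frequent words instead of collecting everything first
top10 = result.takeOrdered(10, key=lambda pair: -pair[1])
for x in top10:
    print("word: ", x[0], " times: ", x[1])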
There should be other ways to do word counting as well, for example NLTK (worth a look if you want to get into natural language processing); a small sketch follows.
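A minimal NLTK sketch, assuming NLTK is installed and its tokenizer data has been downloaded (nltk.download('punkt')); the file path is the same hypothetical one as Method 2:

from nltk import FreqDist, word_tokenize

with open('F:/python/testfile/test_3_1.txt') as f:
    tokens = word_tokenize(f.read())  # NLTK's tokenizer also splits off punctuation

# FreqDist is a frequency counter; most_common(10) gives the ten most frequent tokens
for word, count in FreqDist(tokens).most_common(10):
    print("word: ", word, " times: ", count)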
Learning never ends!