Task 1: Paper Data Statistics

Task Description

  • Task topic: paper count statistics, i.e., counting the number of papers in each computer science subfield over the whole of 2019;
  • Task content: understanding the problem, reading the data with Pandas, and computing the statistics;
  • Task outcome: learning basic Pandas operations;
  • Reference material: the joyful-pandas project by the open-source organization Datawhale

Dataset Introduction

  • Dataset source: dataset link;
  • Alternatively, use only the 2019 data: 2019 dataset
  • The dataset fields are as follows:
    • id: the arXiv ID, which can be used to access the paper;
    • submitter: the submitter of the paper;
    • authors: the authors of the paper;
    • title: the paper title;
    • comments: page count, figure count, and other remarks;
    • journal-ref: information on the journal the paper was published in;
    • doi: the Digital Object Identifier, https://www.doi.org;
    • report-no: the report number;
    • categories: the paper's categories (tags) in the arXiv system;
    • license: the article's license;
    • abstract: the paper abstract;
    • versions: the paper's versions;
    • authors_parsed: structured author information.
"root":{
		"id":string"0704.0001"
		"submitter":string"Pavel Nadolsky"
		"authors":string"C. Bal\'azs, E. L. Berger, P. M. Nadolsky, C.-P. Yuan"
		"title":string"Calculation of prompt diphoton production cross sections at Tevatron and LHC energies"
		"comments":string"37 pages, 15 figures; published version"
		"journal-ref":string"Phys.Rev.D76:013009,2007"
		"doi":string"10.1103/PhysRevD.76.013009"
		"report-no":string"ANL-HEP-PR-07-12"
		"categories":string"hep-ph"
		"license":NULL
		"abstract":string"  A fully differential calculation in perturbative quantum chromodynamics is presented for the production of massive photon pairs at hadron colliders. All next-to-leading order perturbative contributions from quark-antiquark, gluon-(anti)quark, and gluon-gluon subprocesses are included, as well as all-orders resummation of initial-state gluon radiation valid at next-to-next-to leading logarithmic accuracy. The region of phase space is specified in which the calculation is most reliable. Good agreement is demonstrated with data from the Fermilab Tevatron, and predictions are made for more detailed tests with CDF and DO data. Predictions are shown for distributions of diphoton pairs produced at the energy of the Large Hadron Collider (LHC). Distributions of the diphoton pairs from the decay of a Higgs boson are contrasted with those produced from QCD processes at the LHC, showing that enhanced sensitivity to the signal can be obtained with judicious selection of events."
		"versions":[
				0:{
						"version":string"v1"
						"created":string"Mon, 2 Apr 2007 19:18:42 GMT"
					}
				1:{
						"version":string"v2"
						"created":string"Tue, 24 Jul 2007 20:10:27 GMT"
					}]
		"update_date":string"2008-11-26"
		"authors_parsed":[
				0:[
						0:string"Balázs"
						1:string"C."
						2:string""]
				1:[
						0:string"Berger"
						1:string"E. L."
						2:string""]
				2:[
						0:string"Nadolsky"
						1:string"P. M."
						2:string""]
				3:[
						0:string"Yuan"
						1:string"C. -P."
						2:string""]]
}

Introduction to arXiv Paper Categories

The category names and their descriptions can be looked up on the arXiv official site.

Links: the Subject Classifications part of Section 5.3 in https://arxiv.org/help/api/user-manual, or https://arxiv.org/category_taxonomy. A partial list of the 153 paper categories:

'astro-ph': 'Astrophysics',
'astro-ph.CO': 'Cosmology and Nongalactic Astrophysics',
'astro-ph.EP': 'Earth and Planetary Astrophysics',
'astro-ph.GA': 'Astrophysics of Galaxies',
'cs.AI': 'Artificial Intelligence',
'cs.AR': 'Hardware Architecture',
'cs.CC': 'Computational Complexity',
'cs.CE': 'Computational Engineering, Finance, and Science',
'cs.CV': 'Computer Vision and Pattern Recognition',
'cs.CY': 'Computers and Society',
'cs.DB': 'Databases',
'cs.DC': 'Distributed, Parallel, and Cluster Computing',
'cs.DL': 'Digital Libraries',
'cs.NA': 'Numerical Analysis',
'cs.NE': 'Neural and Evolutionary Computing',
'cs.NI': 'Networking and Internet Architecture',
'cs.OH': 'Other Computer Science',
'cs.OS': 'Operating Systems',

Code Implementation and Walkthrough

Importing packages and reading the raw data

# Import the required packages
import seaborn as sns #for plotting
from bs4 import BeautifulSoup #for scraping the arXiv category data
import re #regular expressions, for matching string patterns
import requests #for sending HTTP requests over the network
import json #for reading the data, which is in JSON format
import pandas as pd #data processing and analysis
import numpy as np #numerical processing
import matplotlib.pyplot as plt #plotting

The versions of the packages used here are as follows (Python 3.7.4 or later); a quick way to check your own environment follows the list:

  • seaborn:0.9.0
  • BeautifulSoup:4.8.0
  • requests:2.22.0
  • json:0.8.5
  • pandas:0.25.1
  • matplotlib:3.1.1
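To confirm that a local environment matches, the installed versions can be printed directly. A minimal sketch (the json module ships with the standard library, so it is omitted here):

import sys
import seaborn, bs4, requests, pandas, matplotlib

for m in (seaborn, bs4, requests, pandas, matplotlib):
    print(m.__name__, m.__version__) # each of these packages exposes a __version__ attribute
print("python", sys.version)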
import os
os.getcwd() # get the current working directory
'D:\\jupyter_notebook\\Github\\datawhale数据分析_学术前沿趋势分析\\AcademicTrends'
# Read in the data
data  = []

# Advantages of the with statement: 1. the file handle is closed automatically; 2. exceptions during reading are handled cleanly
with open("./data/arxiv-metadata-oai-2019.json", 'r') as f: 
    for idx, line in enumerate(f): 
        
        # read only the first 100 lines; loading the full dataset takes about 8 GB of RAM
        if idx >= 100: # idx == 100 means we are on the 101st line
            break # exit the loop
        
        data.append(json.loads(line)) # each line is one record in JSON form; json.loads parses it into a dict
        # data: a list whose elements are dicts, one per record
        
data = pd.DataFrame(data) # convert the list into a DataFrame for analysis with pandas
data.shape # show the shape of the data
data.shape #显示数据大小
(100, 14)
data.head() # show the first five rows
id submitter authors title comments journal-ref doi report-no categories license abstract versions update_date authors_parsed
0 0704.0297 Sung-Chul Yoon Sung-Chul Yoon, Philipp Podsiadlowski and Step... Remnant evolution after a carbon-oxygen white ... 15 pages, 15 figures, 3 tables, submitted to M... None 10.1111/j.1365-2966.2007.12161.x None astro-ph None We systematically explore the evolution of t... [{'version': 'v1', 'created': 'Tue, 3 Apr 2007... 2019-08-19 [[Yoon, Sung-Chul, ], [Podsiadlowski, Philipp,...
1 0704.0342 Patrice Ntumba Pungu B. Dugmore and PP. Ntumba Cofibrations in the Category of Frolicher Spac... 27 pages None None None math.AT None Cofibrations are defined in the category of ... [{'version': 'v1', 'created': 'Tue, 3 Apr 2007... 2019-08-19 [[Dugmore, B., ], [Ntumba, PP., ]]
2 0704.0360 Zaqarashvili T.V. Zaqarashvili and K Murawski Torsional oscillations of longitudinally inhom... 6 pages, 3 figures, accepted in A&A None 10.1051/0004-6361:20077246 None astro-ph None We explore the effect of an inhomogeneous ma... [{'version': 'v1', 'created': 'Tue, 3 Apr 2007... 2019-08-19 [[Zaqarashvili, T. V., ], [Murawski, K, ]]
3 0704.0525 Sezgin Ayg\"un Sezgin Aygun, Ismail Tarhan, Husnu Baysal On the Energy-Momentum Problem in Static Einst... This submission has been withdrawn by arXiv ad... Chin.Phys.Lett.24:355-358,2007 10.1088/0256-307X/24/2/015 None gr-qc None This paper has been removed by arXiv adminis... [{'version': 'v1', 'created': 'Wed, 4 Apr 2007... 2019-10-21 [[Aygun, Sezgin, ], [Tarhan, Ismail, ], [Baysa...
4 0704.0535 Antonio Pipino Antonio Pipino (1,3), Thomas H. Puzia (2,4), a... The Formation of Globular Cluster Systems in M... 32 pages (referee format), 9 figures, ApJ acce... Astrophys.J.665:295-305,2007 10.1086/519546 None astro-ph None The most massive elliptical galaxies show a ... [{'version': 'v1', 'created': 'Wed, 4 Apr 2007... 2019-08-19 [[Pipino, Antonio, ], [Puzia, Thomas H., ], [M...
data.columns # all column names
Index(['id', 'submitter', 'authors', 'title', 'comments', 'journal-ref', 'doi',
       'report-no', 'categories', 'license', 'abstract', 'versions',
       'update_date', 'authors_parsed'],
      dtype='object')

A wrapper function for reading the raw data

def readArxivFile(path, columns=['id', 'submitter', 'authors', 'title', 'comments', 'journal-ref', 'doi',
       'report-no', 'categories', 'license', 'abstract', 'versions',
       'update_date', 'authors_parsed'], count=None):
    '''
    Read the metadata file.
        path: relative path to the file
        columns: the columns to keep (not necessarily all of them)
        count: number of rows to read (the raw data has 170,000+ rows)
    '''
    
    data  = []
    with open(path, 'r') as f: 
        for idx, line in enumerate(f): 
            if idx == count: # the index starts at 0, so idx == count means count rows have already been read
                break
                
            # parse one line of data
            d = json.loads(line) # the raw record: a dict containing all columns
            d = {col: d[col] for col in columns} # keep only the requested columns via a dict comprehension (key = column name, value = that record's value); if all columns are needed, json.loads alone suffices
            #print(d)
            data.append(d)

    data = pd.DataFrame(data)
    return data
readArxivFile('./data/arxiv-metadata-oai-2019.json',count=100).shape
(100, 14)
# extract only three of the columns from the full dataset
data = readArxivFile('./data/arxiv-metadata-oai-2019.json',['id', 'categories', 'update_date'])
data.shape
(170618, 3)
data
id categories update_date
0 0704.0297 astro-ph 2019-08-19
1 0704.0342 math.AT 2019-08-19
2 0704.0360 astro-ph 2019-08-19
3 0704.0525 gr-qc 2019-10-21
4 0704.0535 astro-ph 2019-08-19
... ... ... ...
170613 quant-ph/9904032 quant-ph 2019-08-17
170614 solv-int/9511005 solv-int nlin.SI 2019-08-15
170615 solv-int/9809008 solv-int nlin.SI 2019-08-17
170616 solv-int/9909010 solv-int adap-org hep-th nlin.AO nlin.SI 2019-08-17
170617 solv-int/9909014 solv-int nlin.SI 2019-08-21

170618 rows × 3 columns

Counting how often each paper category appears

Using value_counts directly

pd.DataFrame(data.categories.value_counts()) 
# value_counts alone is not correct here: some papers belong to several categories, and value_counts treats each whole string as one new category, when the categories should actually be counted separately
categories
cs.CV 5559
quant-ph 3470
cs.LG stat.ML 3247
math.AP 3025
math.CO 2601
... ...
cs.LG cs.AR cs.DC cs.NE 1
nucl-th cs.LG 1
q-bio.PE cs.CE 1
cs.CV cs.LG eess.AS stat.ML 1
cond-mat.mes-hall math.RA quant-ph 1

15592 rows × 1 columns
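A more direct way to get correct per-category counts is to split each string first and then count. A minimal sketch using pandas' str.split plus Series.explode (explode requires pandas >= 0.25, the version used here):

cat_counts = (data["categories"]
              .str.split()   # split each string on runs of whitespace into a list
              .explode()     # one row per individual category
              .value_counts())
cat_counts.head()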

Splitting the categories field with a regular expression

  • re.split(r"\s+",string)
data.categories.values # selecting a single column gives a Series; .values turns it into an array
array(['astro-ph', 'math.AT', 'astro-ph', ..., 'solv-int nlin.SI',
       'solv-int adap-org hep-th nlin.AO nlin.SI', 'solv-int nlin.SI'],
      dtype=object)
data.categories.values[170616] # index by explicit position here; data.categories[-1] on the Series would fail, since -1 is not a label
'solv-int adap-org hep-th nlin.AO nlin.SI'
re.split(r"\s+",data.categories.values[170616]) # 用空格(1+个)进行分割
['solv-int', 'adap-org', 'hep-th', 'nlin.AO', 'nlin.SI']
# Turn each record's categories from a space-separated string into a list, and collect all the records' lists together
# e.g. [[category 1 of record 1, category 2 of record 1], [category of record 2], ...]
categories_result = []
for i in data.categories.values: # categories is a Series; .values gives an array
    categories_result.append(re.split(r"\s+",i)) # re.split returns a list, so the result is a list of lists
categories_result # nested lists
[['astro-ph'],
 ['math.AT'],
 ['astro-ph'],
 ['gr-qc'],
 ['astro-ph'],...]
categories_result[12]
['cond-mat.str-el', 'cond-mat.mes-hall']

Flattening the nested 2D list into a single 1D list

[s for l in categories_result for s in l]
['astro-ph',
 'math.AT',
 'astro-ph',
 'gr-qc',
 'astro-ph',
 'nucl-ex',
 'quant-ph',
 'math.DG',
 'hep-ex',
 'astro-ph',
 'hep-ex',
 'astro-ph',
 'cond-mat.str-el',
 'cond-mat.mes-hall',
 'astro-ph',...]
# the number of distinct categories across all 2019 papers
unique_category = set([i for l in categories_result for i in l]) 
# l is one paper's category list (it may contain several categories); i is each element of l, a category string
# the list comprehension puts every i into one flat 1D list
# the set removes duplicates, leaving the unique categories across all papers
len(unique_category) # 172 unique categories vs. the 153 listed on the official site
172
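Equivalently, the flattening and deduplication can be written with itertools.chain.from_iterable, avoiding the nested comprehension (a minimal sketch):

from itertools import chain

unique_category = set(chain.from_iterable(categories_result)) # flatten the list of lists, then deduplicate
len(unique_category) # 172, same as before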

Data Preprocessing

First, a rough statistical look at the category information:

  • count: the number of elements in the column;
  • unique: the number of distinct elements in the column;
  • top: the most frequent element in the column;
  • freq: the number of occurrences of the most frequent element;
data["categories"].describe()
count     170618
unique     15592
top        cs.CV
freq        5559
Name: categories, dtype: object

The result shows that there are 170618 records and 15592 distinct category strings (a paper can carry several categories, so for example a paper labeled cs.AI & cs.MM and one labeled cs.AI & cs.OS count as two different strings; this is only a rough count). The most frequent string is cs.CV, i.e. Computer Vision and Pattern Recognition, which appears 5559 times.

The arXiv definition of cs.CV: Covers image processing, computer vision, pattern recognition, and scene understanding. Roughly includes material in ACM Subject Classes I.2.10, I.4, and I.5

Since some papers have more than one category, we next determine how many distinct individual categories appear in this dataset.

string.split(" ")

  • Not fully robust: there is no guarantee that the categories of one paper are separated by exactly one space. If the format is uncertain, use re.split instead; see the sketch below.
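A hypothetical example of the difference (the double space is contrived for illustration):

s = "cs.AI  cs.LG" # note the two spaces between the labels
s.split(" ") # ['cs.AI', '', 'cs.LG'] -- an empty string sneaks in
re.split(r"\s+", s) # ['cs.AI', 'cs.LG'] -- robust to repeated whitespace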
unique_categories = set([i for l in [x.split(' ') for x in data["categories"]] for i in l])
unique_categories # the same as unique_category above
len(unique_categories)
172

Here split separates each multi-category string on spaces into a list, a for comprehension collects every individually occurring category, and set removes the duplicates, giving all distinct paper categories.

The result shows 172 paper categories, more than the 153 obtained directly from the Subject Classifications part of Section 5.3 in https://arxiv.org/help/api/user-manual or from https://arxiv.org/category_taxonomy.

The task asks for an analysis of papers from 2019 onward (for memory and speed, only the 2019 data is used here), so we first preprocess the time feature to select all papers from 2019 onward:

data["year"] = pd.to_datetime(data["update_date"]).dt.year #将update_date从例如2019-02-20的str变为datetime格式,并提取出year
del data["update_date"] #删除 update_date特征,其使命已完成
data = data[data["year"] >= 2019] #找出 year 中2019年以后的数据,并将其他数据删除--我们这里只用了2019的数据 所以全部数据的year>=2019的,仍然取出所有数据170618行

# data.groupby(['categories','year']) #以 categories 进行排序,如果同一个categories 相同则使用 year 特征进行排序
data.reset_index(drop=True, inplace=True) #重新编号--这里不用重置索引 因为没有数据增加/减少
data #查看结果
id categories year
0 0704.0297 astro-ph 2019
1 0704.0342 math.AT 2019
2 0704.0360 astro-ph 2019
3 0704.0525 gr-qc 2019
4 0704.0535 astro-ph 2019
... ... ... ...
170613 quant-ph/9904032 quant-ph 2019
170614 solv-int/9511005 solv-int nlin.SI 2019
170615 solv-int/9809008 solv-int nlin.SI 2019
170616 solv-int/9909010 solv-int adap-org hep-th nlin.AO nlin.SI 2019
170617 solv-int/9909014 solv-int nlin.SI 2019

170618 rows × 3 columns

This gives us all papers from 2019 onward; next we pick out the articles in the computer science field:

#scrape the full category taxonomy
website_url = requests.get('https://arxiv.org/category_taxonomy').text #fetch the page text
soup = BeautifulSoup(website_url,'lxml') #parse it with the lxml parser, which is fast
root = soup.find('div',{'id':'category_taxonomy_list'}) #find the tag that anchors the taxonomy list
tags = root.find_all(["h2","h3","h4","p"], recursive=True) #collect the heading and paragraph tags

#initialize the str and list variables
level_1_name = ""
level_2_name = ""
level_2_code = ""
level_1_names = []
level_2_codes = []
level_2_names = []
level_3_codes = []
level_3_names = []
level_3_notes = []

#walk the tags and fill in the three-level hierarchy
for t in tags:
    if t.name == "h2":
        level_1_name = t.text    
        level_2_code = t.text
        level_2_name = t.text
    elif t.name == "h3":
        raw = t.text
        level_2_code = re.sub(r"(.*)\((.*)\)",r"\2",raw) #regex: pattern (.*)\((.*)\); replacement "\2", the code inside the parentheses; input string raw
        level_2_name = re.sub(r"(.*)\((.*)\)",r"\1",raw) #replacement "\1": the name before the parentheses
    elif t.name == "h4":
        raw = t.text
        level_3_code = re.sub(r"(.*) \((.*)\)",r"\1",raw)
        level_3_name = re.sub(r"(.*) \((.*)\)",r"\2",raw)
    elif t.name == "p":
        notes = t.text
        level_1_names.append(level_1_name)
        level_2_names.append(level_2_name)
        level_2_codes.append(level_2_code)
        level_3_names.append(level_3_name)
        level_3_codes.append(level_3_code)
        level_3_notes.append(notes)

#assemble the scraped information into a DataFrame
df_taxonomy = pd.DataFrame({
    'group_name' : level_1_names,
    'archive_name' : level_2_names,
    'archive_id' : level_2_codes,
    'category_name' : level_3_names,
    'categories' : level_3_codes,
    'category_description': level_3_notes
    
})

#按照 "group_name" 进行分组,在组内使用 "archive_name" 进行排序
df_taxonomy.groupby(["group_name","archive_name"])
df_taxonomy
group_name archive_name archive_id category_name categories category_description
0 Computer Science Computer Science Computer Science Artificial Intelligence cs.AI Covers all areas of AI except Vision, Robotics...
1 Computer Science Computer Science Computer Science Hardware Architecture cs.AR Covers systems organization and hardware archi...
2 Computer Science Computer Science Computer Science Computational Complexity cs.CC Covers models of computation, complexity class...
3 Computer Science Computer Science Computer Science Computational Engineering, Finance, and Science cs.CE Covers applications of computer science to the...
4 Computer Science Computer Science Computer Science Computational Geometry cs.CG Roughly includes material in ACM Subject Class...
... ... ... ... ... ... ...
150 Statistics Statistics Statistics Computation stat.CO Algorithms, Simulation, Visualization
151 Statistics Statistics Statistics Methodology stat.ME Design, Surveys, Model Selection, Multiple Tes...
152 Statistics Statistics Statistics Machine Learning stat.ML Covers machine learning papers (supervised, un...
153 Statistics Statistics Statistics Other Statistics stat.OT Work in statistics that does not fit into the ...
154 Statistics Statistics Statistics Statistics Theory stat.TH stat.TH is an alias for math.ST. Asymptotics, ...

155 rows × 6 columns
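As an aside, with df_taxonomy in hand we can check why the data contains 172 categories while the taxonomy lists fewer: some records carry legacy identifiers that are no longer in the current taxonomy. A minimal sketch (assumes unique_categories from earlier):

legacy = unique_categories - set(df_taxonomy["categories"]) # categories in the 2019 data but absent from the current taxonomy
sorted(legacy) # includes legacy codes such as 'solv-int' and 'adap-org' seen in the data above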

A note on the regex operation in the code above: re.sub replaces the matches of a pattern within a string.

  • pattern: the pattern string of the regex.
  • repl: the replacement, a string or a function.
  • string: the original string to search and replace in.
  • count: the maximum number of replacements; the default 0 replaces all matches.
  • flags: compilation flags, given as a number.
  • pattern, repl, and string are required arguments.

re.sub(pattern, repl, string, count=0, flags=0)

An example:

import re

phone = "2004-959-559 # this is a phone number"

# r'#.*$' matches a '#' and any characters after it, up to the end of the string
# re.sub replaces the pattern wherever it matches inside the string
num = re.sub(r'#.*$', "", phone) # delete the comment
print ("Phone number : ", num)
Phone number :  2004-959-559 
if re.match(r'#.*$',phone): # re.match tests only from the very beginning of the string; phone does not start with '#', so there is no match
    print("ok")
else:
    print("failed") # the result is failed
# remove everything that is not a digit
num = re.sub(r'\D', "", phone)
print ("Phone number : ", num)
Phone number :  2004959559

For more detail, see https://www.runoob.com/python3/python3-reg-expressions.html

Applied to our code:

re.sub(r"(.*)\((.*)\)",r"\2", " Astrophysics(astro-ph)")
'astro-ph'

The corresponding arguments:

  • The pattern string has the form "anything" + "(" + "anything" + ")".
  • The replacement repl is the content of the second capture group.
  • The input string is the raw scraped text.
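For comparison, replacing with the first capture group extracts the name instead of the code:

re.sub(r"(.*)\((.*)\)", r"\1", " Astrophysics(astro-ph)")
' Astrophysics'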

A recommended online regex tester: https://tool.oschina.net/regex/

Data Analysis and Visualization

First, let's look at how paper counts are distributed across the top-level groups:

We use merge to join the two DataFrames on their shared column "categories", group by "group_name", count the results into the "id" column, and sort.

data1 = data.merge(df_taxonomy,on="categories",how="left").drop_duplicates(["id","group_name"]).groupby("group_name").agg({"id":"count"})
data1
# group_name contains NaN for categories missing from the taxonomy, but groupby drops NaN keys automatically
id
group_name
Computer Science 18087
Economics 173
Electrical Engineering and Systems Science 1371
Mathematics 24495
Physics 38379
Quantitative Biology 886
Quantitative Finance 352
Statistics 1802
data2 = data1.sort_values("id",ascending=False)
data2
id
group_name
Physics 38379
Mathematics 24495
Computer Science 18087
Statistics 1802
Electrical Engineering and Systems Science 1371
Quantitative Biology 886
Quantitative Finance 352
Economics 173

Visualization: a pie chart of the groups

data2.sum(0)
id    85545
dtype: int64
data2.div(data2.sum(0),axis=1).iloc[:,0]
group_name
Physics                                       0.448641
Mathematics                                   0.286341
Computer Science                              0.211433
Statistics                                    0.021065
Electrical Engineering and Systems Science    0.016027
Quantitative Biology                          0.010357
Quantitative Finance                          0.004115
Economics                                     0.002022
Name: id, dtype: float64
data2.index # the labels argument for the pie chart
Index(['Physics', 'Mathematics', 'Computer Science', 'Statistics',
       'Electrical Engineering and Systems Science', 'Quantitative Biology',
       'Quantitative Finance', 'Economics'],
      dtype='object', name='group_name')
# Adjust the font size
import matplotlib.pylab as pylab
params = {"axes.titlesize": "xx-large"} # still not large enough, so method two below is used instead
pylab.rcParams.update(params)
#Valid font sizes are xx-small, x-small, small, medium, large, x-large, xx-large, smaller, larger.
# Method two
import matplotlib as mpl
mpl.rcParams["font.size"] = 15 # sets the default font size for text on the figure
# Visualize with a pie chart
fig,ax = plt.subplots(1,1,figsize=(24,24))
labels = data2.index

explodes = (0, 0, 0, 0.2, 0.3, 0.3, 0.2, 0.1) 
ax.pie(data2.div(data2.sum(0),axis=1).iloc[:,0],explode=explodes,labels = labels,
      autopct = "%1.2f%%",startangle=0,textprops={"fontsize":30}); # x must be 1-D, i.e. a single column; textprops styles the text drawn on the wedges
ax.set_title("categories percentage");
# Parameters
# startangle : float, default: 0 degrees
# The angle by which the start of the pie is rotated, counterclockwise from the x-axis.

[Figure 1: pie chart of the category group percentages]

_df = data.merge(df_taxonomy, on="categories", how="left").drop_duplicates(["id","group_name"]).groupby("group_name").agg({"id":"count"}).sort_values(by="id",ascending=False).reset_index()

_df

group_name id
0 Physics 79985
1 Mathematics 51567
2 Computer Science 40067
3 Statistics 4054
4 Electrical Engineering and Systems Science 3297
5 Quantitative Biology 1994
6 Quantitative Finance 826
7 Economics 576

Below we visualize this result with a pie chart:

fig = plt.figure(figsize=(15,12))
explode = (0, 0, 0, 0.2, 0.3, 0.3, 0.2, 0.1) 
plt.pie(_df["id"],  labels=_df["group_name"], autopct='%1.2f%%', startangle=160, explode=explode)
plt.tight_layout()
plt.show()

[Figure 2: pie chart of the category group percentages]

Next we count papers from 2019 onward in each subfield of Mathematics. Again we use merge to join the two DataFrames on the shared column categories and filter with query; then we count and sort to get the result below:

group_name="Mathematics"
cats = data.merge(df_taxonomy, on="categories").query("group_name == @group_name")
cats
# category_name是子类别的全称
# categories是子类别的简写
id categories year group_name archive_name archive_id category_name category_description
0 0704.0342 math.AT 2019 Mathematics Mathematics Mathematics Algebraic Topology Homotopy theory, homological algebra, algebrai...
1 0902.1274 math.AT 2019 Mathematics Mathematics Mathematics Algebraic Topology Homotopy theory, homological algebra, algebrai...
2 1104.5331 math.AT 2019 Mathematics Mathematics Mathematics Algebraic Topology Homotopy theory, homological algebra, algebrai...
3 1203.5288 math.AT 2019 Mathematics Mathematics Mathematics Algebraic Topology Homotopy theory, homological algebra, algebrai...
4 1209.1240 math.AT 2019 Mathematics Mathematics Mathematics Algebraic Topology Homotopy theory, homological algebra, algebrai...
... ... ... ... ... ... ... ... ...
64899 1912.03519 math.GN 2019 Mathematics Mathematics Mathematics General Topology Continuum theory, point-set topology, spaces w...
64900 1912.03631 math.GN 2019 Mathematics Mathematics Mathematics General Topology Continuum theory, point-set topology, spaces w...
64901 1912.03796 math.GN 2019 Mathematics Mathematics Mathematics General Topology Continuum theory, point-set topology, spaces w...
64902 1912.04214 math.GN 2019 Mathematics Mathematics Mathematics General Topology Continuum theory, point-set topology, spaces w...
64903 1912.11988 math.GN 2019 Mathematics Mathematics Mathematics General Topology Continuum theory, point-set topology, spaces w...

24495 rows × 8 columns

cats.groupby(["year","category_name"]).count().reset_index().pivot(index="category_name", columns="year",values="id")
year 2019
category_name
Algebraic Geometry 1726
Algebraic Topology 386
Analysis of PDEs 3025
Category Theory 134
Classical Analysis and ODEs 803
Combinatorics 2601
Commutative Algebra 370
Complex Variables 490
Differential Geometry 1297
Dynamical Systems 1177
Functional Analysis 1166
General Mathematics 296
General Topology 179
Geometric Topology 685
Group Theory 647
History and Overview 132
K-Theory and Homology 54
Logic 642
Metric Geometry 208
Number Theory 2025
Numerical Analysis 990
Operator Algebras 244
Optimization and Control 1718
Probability 1908
Quantum Algebra 165
Representation Theory 599
Rings and Algebras 537
Spectral Theory 124
Symplectic Geometry 167
# Within Mathematics, the Analysis of PDEs subcategory has the most papers
# Existence and uniqueness, boundary conditions, linear and non-linear operators, stability, soliton theory, integrable PDE's, conservation laws, qualitative dynamics

Counting papers from 2019 onward in each Computer Science subfield

group_name="Computer Science"
cats = data.merge(df_taxonomy, on="categories").query("group_name == @group_name")
cats.groupby(["year","category_name"]).count().reset_index().pivot(index="category_name", columns="year",values="id") 

year 2019 2020
category_name
Artificial Intelligence 558 757
Computation and Language 2153 2906
Computational Complexity 131 188
Computational Engineering, Finance, and Science 108 205
Computational Geometry 199 216
Computer Science and Game Theory 281 323
Computer Vision and Pattern Recognition 5559 6517
Computers and Society 346 564
Cryptography and Security 1067 1238
Data Structures and Algorithms 711 902
Databases 282 342
Digital Libraries 125 157
Discrete Mathematics 84 81
Distributed, Parallel, and Cluster Computing 715 774
Emerging Technologies 101 84
Formal Languages and Automata Theory 152 137
General Literature 5 5
Graphics 116 151
Hardware Architecture 95 159
Human-Computer Interaction 420 580
Information Retrieval 245 331
Logic in Computer Science 470 504
Machine Learning 177 538
Mathematical Software 27 45
Multiagent Systems 85 90
Multimedia 76 66
Networking and Internet Architecture 864 783
Neural and Evolutionary Computing 235 279
Numerical Analysis 40 11
Operating Systems 36 33
Other Computer Science 67 69
Performance 45 51
Programming Languages 268 294
Robotics 917 1298
Social and Information Networks 202 325
Software Engineering 659 804
Sound 7 4
Symbolic Computation 44 36
Systems and Control 415 133

The results show that Computer Vision and Pattern Recognition is the CS subfield with the most papers, far ahead of the other CS subfields, and its paper count keeps growing year over year. In addition, Computation and Language, Cryptography and Security, and Robotics each have 2019 paper counts above or close to 1000, which matches our intuition.
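As a possible extension beyond the original task, the 2019 Computer Science subfield counts also read well as a horizontal bar chart. A minimal sketch (assumes the cats DataFrame from the Computer Science query above):

cs_counts = cats.groupby("category_name")["id"].count().sort_values() # papers per CS subfield, ascending
fig, ax = plt.subplots(figsize=(10, 12))
cs_counts.plot.barh(ax=ax) # horizontal bars, largest subfield at the top
ax.set_xlabel("number of papers (2019)")
ax.set_title("Computer Science subfields on arXiv, 2019")
plt.tight_layout()
plt.show()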
