Legacy data, now deprecated. The new release splits the oversized CSV into several smaller files of under 1 GB each, averaging about 800,000 rows per file.
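For reference, a split like that can be reproduced with pandas chunked reading. A minimal sketch, assuming the source path below and an 800,000-row chunk size (both are assumptions, not the data provider's actual tooling):

import pandas as pd

source = 'D:/2020-06-08-weekly-patterns.csv'  # assumed path
# Stream the oversized CSV in 800,000-row chunks; each piece is written out
# as a standalone CSV with its own header row.
for i, chunk in enumerate(pd.read_csv(source, chunksize=800_000)):
    chunk.to_csv(f'D:/safegraph_CSV/part_{i:03d}.csv', index=False)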
Take 2020-06-08-weekly-patterns.csv as an example:
File size: 4.37 GB, covering the period 2020-06-08 through 2020-06-15.
25 fields in total. MySQL reports 3,819,825 rows, pandas reports 3,819,697 rows, and WPS shows only 1,048,576 rows, which is simply the spreadsheet's maximum row count (2^20), not the real total.
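The MySQL/pandas discrepancy may come from quoted fields containing embedded newlines: a raw line count and a parsed record count then disagree. A minimal cross-check sketch, assuming the file path used below; the chunked read keeps memory bounded:

import pandas as pd

file_location = 'D:/2020-06-08-weekly-patterns.csv'

# Raw line count: splits on every '\n', even those inside quoted fields.
with open(file_location, 'rb') as f:
    raw_lines = sum(1 for _ in f)

# Parsed record count: pandas honors CSV quoting, so a newline inside a
# quoted field does not start a new record.
parsed_rows = sum(len(chunk)
                  for chunk in pd.read_csv(file_location, chunksize=100_000, usecols=[0]))

print(raw_lines - 1, parsed_rows)  # minus 1 to exclude the header line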
Loading with Python's pandas in a normal environment takes over 60 seconds.
Opening it in Sublime Text 3 subjectively takes 20-40 seconds.
Loading it in WPS subjectively takes 40+ seconds, and when several CSVs of similar size (or merely a few hundred thousand rows) are opened at once, WPS fails to open them; scrolling down also freezes the screen.
With a MySQL database service, the database installed on a laptop under a standard development-environment configuration (so runtime memory usage stays low), importing the .csv through Navicat v11.0 takes a very long time: 50 minutes in, 2.26 million rows have been processed, 59% of the progress bar.
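If the Navicat wizard stays this slow, a scripted import is worth a try. A minimal sketch using pandas plus SQLAlchemy; the connection string, credentials, and driver (pymysql) are assumptions:

import pandas as pd
from sqlalchemy import create_engine

# Assumed credentials/host; the target schema `safegraph` is taken from the INSERT below.
engine = create_engine('mysql+pymysql://user:password@localhost/safegraph')

# Stream the 4.37 GB file in chunks so it never sits in memory all at once,
# appending each chunk to the target table.
for chunk in pd.read_csv('D:/2020-06-08-weekly-patterns.csv', chunksize=100_000):
    chunk.to_sql('2020-06-08-weekly-patterns', engine, if_exists='append', index=False)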
# INSERT statement for one of the rows:
INSERT INTO `safegraph`.`2020-06-08-weekly-patterns` (`safegraph_place_id`, `location_name`, `street_address`, `city`, `region`, `postal_code`, `iso_country_code`, `safegraph_brand_ids`, `brands`, `date_range_start`, `date_range_end`, `raw_visit_counts`, `raw_visitor_counts`, `visits_by_day`, `visits_by_each_hour`, `poi_cbg`, `visitor_home_cbgs`, `visitor_daytime_cbgs`, `visitor_country_of_origin`, `distance_from_home`, `median_dwell`, `bucketed_dwell_times`, `related_same_day_brand`, `related_same_week_brand`, `device_type`) VALUES ('sg:0f732233b4f146b09ce2398e02063b47', 'The Art of Shaving', '55 W 49th St', 'New York', 'NY', '10112', 'US', 'SG_BRAND_2b2de6e5e806bd0b', 'The Art of Shaving', '2020-06-08T00:00:00-04:00', '2020-06-15T00:00:00-04:00', '22', '17', '[5,3,4,6,2,0,2]', '[1,0,0,0,0,1,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,1,1,0,0,0,0,0,1,0,1,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,1,0,1,1,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,2,1,0,1,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,1]', '360610104001', '{\"340390349002\":4,\"360610022014\":4,\"340130158003\":4}', '{\"340390348004\":4}', '{\"US\":16}', '12214', '73.5', '{\"<5\":0,\"5-20\":6,\"21-60\":4,\"61-240\":9,\">240\":3}', '{\"Cole Haan\":25,\"Build A Bear Workshop\":25}', '{\"sweetgreen\":23,\"Sports Clubs Network\":18,\"Subway\":14,\"by CHLOE\":12,\"Hale and Hearty\":12,\"LOFT\":12,\"Build A Bear Workshop\":12,\"Bed Bath & Beyond\":12,\"Dunkin\'\":10,\"Wendy\'s\":9,\"Starbucks\":7,\"Topshop\":6,\"Haru Sushi\":6,\"Melt Shop\":6,\"Amazon Go\":6,\"Godiva\":6,\"Cole Haan\":6,\"LEGO\":6,\"Del Frisco\'s Grille\":6,\"ALDO\":6}', '{\"android\":7,\"ios\":11}');
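Note that several of those columns (visits_by_day, visits_by_each_hour, visitor_home_cbgs, visitor_daytime_cbgs, visitor_country_of_origin, bucketed_dwell_times, related_same_day_brand, related_same_week_brand, device_type) hold JSON serialized as strings, so they need parsing before analysis. A minimal sketch, assuming df has been loaded with pd.read_csv as in the code further down:

import json

json_cols = ['visits_by_day', 'visits_by_each_hour', 'visitor_home_cbgs',
             'visitor_daytime_cbgs', 'visitor_country_of_origin',
             'bucketed_dwell_times', 'related_same_day_brand',
             'related_same_week_brand', 'device_type']
for col in json_cols:
    # json.loads turns '[5,3,4,6,2,0,2]' into [5, 3, 4, 6, 2, 0, 2], etc.;
    # non-string values (e.g. NaN) are left untouched.
    df[col] = df[col].apply(lambda s: json.loads(s) if isinstance(s, str) else s)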
Issues visible in the data so far:
df1=df[df["safegraph_place_id"].str.contains("sg:0f732233b4f146b09ce2398e02063b47")] # output shows [1 rows x 25 columns]
Therefore the decision is to do the data cleaning with Jupyter Notebook + Python:
import pandas as pd
import time

fileLocation='D:/2020-06-08-weekly-patterns.csv'

# time.perf_counter() measures wall-clock time; the original time.process_time()
# only counts CPU time and undercounts the I/O-heavy CSV load.
timee=time.perf_counter()
df=pd.read_csv(fileLocation)
print(time.perf_counter()-timee)  # how long the full load takes

timee=time.perf_counter()
df1=df[df["safegraph_place_id"].str.contains("sg:0f732233b4f146b09ce2398e02063b47")]
print(time.perf_counter()-timee)  # how long one ID lookup takes

timee=time.perf_counter()
# Save the cleaned data as a .csv file:
# now = time.strftime("%Y-%m-%d-%H_%M_%S",time.localtime(time.time()))
# fname="D:/safegraph_CSV/weekly-patterns"+now+".csv"
# df1.to_csv(fname)
print(df1)
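.str.contains runs every value through the regex engine; for an exact ID lookup, a plain equality test is both faster and safe against regex metacharacters. A small variant sketch of the lookup above:

# Exact match on the ID: no regex scan, same single-row result.
df1 = df[df["safegraph_place_id"] == "sg:0f732233b4f146b09ce2398e02063b47"]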
To be continued…