Recommender System Implementation (1): Processing the Dataset

I have started working on recommender systems, and the first step is to implement the offline algorithms. After a day of tinkering, here is a summary of how to process a network dataset with Python.

  1. Import the libraries:
import random
import heapq # implementation of the heap queue algorithm
import datetime
import math
import time
import networkx as nx # a library for studying graphs and networks
import argparse  # makes it easy to write user-friendly command-line interfaces.
import matplotlib.pyplot as plt # create figures
import pickle # serialize and de-serialize a Python object structure
import numpy as np # the fundamental package for scientific computing 
import operator # exports a set of efficient functions corresponding to the intrinsic operators of Python
  2. Set the file paths
file_address_5 = '../datasets/CitHep/cithepEdges.txt'
save_dir = '../datasets/CitHep/'

../ refers to the parent directory of the current directory, and ./ refers to the current directory.
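
A quick way to check where such a relative path ends up is to resolve it with os.path; a minimal sketch (os is not imported above, so it is added here):

import os

# Resolve the relative path against the current working directory:
# '..' climbs one level up before descending into datasets/CitHep.
print(os.path.abspath('../datasets/CitHep/cithepEdges.txt'))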

  3. Read the dataset:
NodeDegree = {}

with open(file_address_5) as f:
    for line in f:
        if line[0] != '#':
            data = line.split('\t')
            u = int(data[0])
            v = int(data[1])
            if u not in NodeDegree:
                NodeDegree[u] = 1
            else:
                NodeDegree[u] += 1
            if v not in NodeDegree:
                NodeDegree[v] = 1
            else:
                NodeDegree[v] += 1

The dataset stores the edges of the graph. NodeDegree is a dictionary holding every node that appears, together with the total number of its incoming and outgoing edges.
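
For intuition, here is a self-contained toy version of the counting loop above, run on a hypothetical three-edge list instead of the real file (the node IDs are made up):

toy_edges = [(1001, 1002), (1001, 1003), (1002, 1003)]  # hypothetical edges
toy_degree = {}
for u, v in toy_edges:
    # dict.get(key, 0) is a compact equivalent of the if/else above
    toy_degree[u] = toy_degree.get(u, 0) + 1
    toy_degree[v] = toy_degree.get(v, 0) + 1
print(toy_degree)  # {1001: 2, 1002: 2, 1003: 2}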

  4. Save the node set
FinalNodeList = []
FinalNodeDegree = {}
max_degree = 6000
min_degree = 0

for key in NodeDegree:
    if min_degree <= NodeDegree[key] <= max_degree:
        FinalNodeList.append(key)
        FinalNodeDegree[key] = NodeDegree[key]

Then serialize it to disk:

pickle.dump(FinalNodeList, open(save_dir+'NodesDegree'+str(max_degree)+'_'+str(min_degree)+'.list', "wb"))
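
pickle.load is the inverse of pickle.dump; a minimal round-trip check:

# Read the serialized node list back and confirm it round-tripped intact.
loaded = pickle.load(open(save_dir+'NodesDegree'+str(max_degree)+'_'+str(min_degree)+'.list', 'rb'))
print(loaded == FinalNodeList)  # True
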
  5. Subsample the dataset
NodeList = FinalNodeList  # reuse the filtered node list built in the previous step
NodeNum = len(NodeList)
print(NodeNum)
Small_NodeList = [NodeList[i] for i in sorted(random.sample(range(len(NodeList)), NodeNum//6))]
NodeList = Small_NodeList
print(len(NodeList))
pickle.dump(NodeList, open(save_dir+'Small_NodeList.list', "wb"))

Syntax of sorted (Python 3; the cmp parameter existed only in Python 2 and has been removed):
sorted(iterable, key=None, reverse=False)

Meaning of random.sample(sequence, k):
Parameters: sequence can be a list, tuple, string, or set; k is an integer specifying the length of the sample.
Returns: a new list of k elements chosen from the sequence.
Here, one sixth of the nodes are selected at random and stored back into NodeList.
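
A toy illustration of the sampling line above, on a hypothetical list of 12 node IDs (the output varies because the sample is random):

import random

ids = list(range(100, 112))                        # 12 made-up node IDs
picked = sorted(random.sample(range(len(ids)), len(ids)//6))
print(picked)                                      # e.g. [3, 7]: two sorted random indices
print([ids[i] for i in picked])                    # e.g. [103, 107]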

file_address = save_dir+'cithepEdges.txt'
NodeSet = set(NodeList)  # set membership tests are O(1), much faster than scanning the list
# start = time.time()
G = nx.DiGraph()
# print('Start Reading')
with open(file_address) as f:
    for line in f:
        if line[0] != '#':
            u, v = list(map(int, line.split('\t')))
            if u in NodeSet and v in NodeSet:
                try:
                    G[u][v]['weight'] += 1
                except KeyError:  # edge does not exist yet
                    G.add_edge(u, v, weight=1)
                try:
                    G[v][u]['weight'] += 1
                except KeyError:
                    G.add_edge(v, u, weight=1)
# print('Start Dumping')
# print(len(G.nodes()), len(G.edges()))
pickle.dump(G, open(save_dir+'Small_Final_SubG.G', "wb"))
# print('Built citation graph G', time.time() - start, 's')

Some basic networkx background:
By definition, a Graph is a collection of nodes (vertices) together with identified pairs of nodes (called edges, links, etc.). In NetworkX, nodes can be any hashable object, e.g. a text string, an image, an XML object, another Graph, a custom node object, etc.
DiGraph, used here, is a directed graph.
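
A minimal DiGraph sketch (toy nodes 'a' and 'b') showing the weight-update pattern used in the loop above:

import networkx as nx

H = nx.DiGraph()
H.add_edge('a', 'b', weight=1)   # nodes can be any hashable object
H['a']['b']['weight'] += 1       # edge attributes behave like a nested dict
print(H['a']['b']['weight'])     # 2
print(list(H.successors('a')))   # ['b']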

  6. Generate a random probability for each edge
save_dir = '../datasets/CitHep/'

edgeDic = {}
G = pickle.load(open(save_dir+'Small_Final_SubG.G', 'rb'))

for u in G.nodes():
    for v in G[u]:
        prob = random.uniform(0,0.1)
        edgeDic[(u,v)] = prob
        
pickle.dump(edgeDic, open(save_dir+'Probability.dic', "wb"))

Probability is stored as a dictionary: each key is a tuple representing a directed edge, and the value is the probability drawn for that edge.

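For example, a few entries can be inspected like this (the exact node IDs and values differ from run to run):

probs = pickle.load(open(save_dir+'Probability.dic', 'rb'))
for edge, p in list(probs.items())[:3]:
    print(edge, p)  # prints a (u, v) tuple and a float drawn from [0, 0.1)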

Notes:

  1. Dataset: https://snap.stanford.edu/data/cit-HepTh.html
  2. Code reference: implementation of the KDD 2019 OIM paper
