PN_IB: data from 2011-03-01 to 2019-10-01, at monthly intervals.
PN_usage: data from 2011-06-25 to 2017-08-05, at weekly intervals.
long_term_pred_results: data from 2014-10-04 to 2018-03-31, at weekly intervals; the predicted quantity is the IB volume.
File | Purpose |
---|---|
Demo_code.py | Acts as the main function: does the preprocessing and calls Long_term_pred.py |
Long_term_pred.py | Long-term prediction; calls Anomaly_Detection.py |
Anomaly_Detection.py | Anomaly detection; calls the detect_anoms function |
detect_anoms.py | Implements the ESD anomaly detection algorithm |
Outlier_replace.py | Replaces anomalous points, using a Kalman filter |
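To make the call chain concrete, a hypothetical skeleton of Demo_code.py is sketched below; the `long_term_pred` entry point and its signature are assumptions for illustration, not the modules' actual APIs:

```python
# Demo_code.py -- hypothetical skeleton of the pipeline described in the table above.
# The long_term_pred() name and signature are assumed for illustration.
import pandas as pd
from Long_term_pred import long_term_pred  # assumed entry point

def main():
    usage = pd.read_csv('R31 1019 usage.csv')
    ib = pd.read_csv('total topmost ib with new startpoint2.0.csv', header=None)
    # ...preprocessing as described in the sections below, then:
    results = long_term_pred(usage, ib)  # internally: Anomaly_Detection ->
                                         # detect_anoms (ESD) -> Outlier_replace (Kalman)
    return results

if __name__ == '__main__':
    main()
```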
Language | Function | Purpose | Related links / notes |
---|---|---|---|
R | `mdy()` | Converts dates from month/day/year form to year-month-day | https://www.rdocumentation.org/packages/splusTimeDate/versions/2.5.0-137/topics/mdy; how to install mdy: https://www.rdocumentation.org/packages/lubridate/versions/1.7.3 |
Python | `pd.to_datetime(pd.Series, format='%m/%d/%Y')` | Converts dates in the given format to YYYY-MM-DD form | |
R | `seq(time.min, time.max, by="week")` | Generates a time sequence at weekly intervals, from the earliest date to the latest | |
Python | `start = PN_usage['date'].min()`<br>`end = PN_usage['date'].max()`<br>`sevendayscnt = int((end - start).days / 7 + 1)`<br>`fulldate = []`<br>`for i in range(0, sevendayscnt):`<br>`    fulldate.append(start + datetime.timedelta(days=7 * i))` | Generates a time sequence at 7-day intervals between two dates | No ready-made function exists; DIY |
R | `data_decomp <- stl(ts(data[[2L]], frequency = num_obs_per_period), s.window = "periodic", robust = TRUE)` | Decomposes a time series into trend, seasonal, and residual components | |
Python | `decomposition = sm.tsa.seasonal_decompose(data.set_index('date'), model="additive", filt=None, freq=4, two_sided=True)` (the result is not exactly identical to R's decomposition) | Decomposes a time series into trend, seasonal, and residual components | Plot with `decomposition.plot()` |
R | `query <- reinterpolate(PN_IB[[2L]], new.length = round(dim(PN_IB)[1]*4.34524))` | Interpolation | |
Python | `yinput = y['ib']`<br>`xinput = np.linspace(start=1, stop=len(y), num=len(y), endpoint=True)`<br>`xnew = np.linspace(start=1, stop=len(y), num=round(len(y) * 4.34524), endpoint=True)`<br>`f = interpolate.interp1d(xinput, yinput, kind='slinear')`<br>`query = f(xnew)` | Interpolation | `from scipy import interpolate`; interp1d's kind parameter has many options; in my tests kind="slinear" matches R's reinterpolate exactly |
R | `PN_usage[[2L]] <- lowess(PN_usage[[2L]], f=0.2)$y` | Locally weighted regression, used to smooth the curve | https://stats.stackexchange.com/questions/125359/lowess-r-python-statsmodels-vs-matlab-biopython |
Python | `data['usage'] = pd.DataFrame(sm.nonparametric.lowess(exog=list(data['usage'].index), endog=list(data['usage']), frac=0.2)).iloc[:, 1]` | Locally weighted smoothing | `import statsmodels.api as sm`; API: http://www.statsmodels.org/stable/generated/statsmodels.nonparametric.smoothers_lowess.lowess.html; visualization: https://stackoverflow.com/questions/20618804/how-to-smooth-a-curve-in-the-right-way |
R | `reference <- zscore(reference)` | z-score transform | |
Python | `reference = stats.zscore(reference, ddof=0)` (the z-score results of Python and R differ by about 0.001) | z-score transform | `from scipy import stats` |
R | `score <- lb_improved(reference, query, window.size = 10, norm = 'L2')` | Computes the distance between two time series | https://www.rdocumentation.org/packages/dtwclust/versions/3.1.1/topics/lb_improved; the threshold in R is 10 |
Python | `distance, path = fastdtw(query, reference, radius=20, dist=euclidean)` | Computes the distance between two time series | `from fastdtw import fastdtw`; `from scipy.spatial.distance import euclidean`; the results differ considerably from R's. I set the threshold to 35; this still needs refinement. |
R | `results <- solve.QP(t(X2) %*% X2, t(reference) %*% X2, cbind(c(min(y[[2L]]), 1), c(1, 0)), c(0, 0))` | Solves a quadratic program | Official API: https://www.rdocumentation.org/packages/quadprog/versions/1.5-5/topics/solve.QP |
Python | `query = ynew  # len(query) corresponds to R's length(query)`<br>`reference = data['usage']`<br>`X2 = pd.DataFrame({'ibofquery': ynew, 'one': 1})`<br>`X2 = X2.as_matrix()`<br>`P = np.dot(X2.T, X2)`<br>`q = np.dot(reference.as_matrix().T, X2) * (-1)`<br>`G = matrix([[-1.0, -1.0], [-1.0, 0.0]], tc='d')`<br>`h = matrix([0.0, 0.0], tc='d')`<br>`sol = solvers.qp(matrix(P), matrix(q), G=G, h=h)  # , solver='mosek'`<br>`baseline = sol['x'][1] + sol['x'][0] * query  # baseline is an array` | Solves a quadratic program | `from cvxopt import matrix` (conda install -c conda-forge cvxopt); `from cvxopt import solvers`; usage guide: https://segmentfault.com/a/1190000004439482; beware of a big pitfall: R's solve.QP minimizes (1/2)b'Db - d'b while cvxopt's solvers.qp minimizes (1/2)x'Px + q'x, so the signs in the objective differ, and copying R's parameters verbatim into Python is wrong (hence the factor of -1 on q)! |
R | `func_sigma <- match.fun(mad)`<br>`func_sigma(data[[2L]])` | Computes MAD (median absolute deviation) | MAD is a robust measure of the variability of a univariate sample; it is used in the ESD algorithm |
Python | `data_sigma = data['count'].mad()` | Computes MAD | `pd.Series` has a built-in `mad()` |
R | `t <- qt(p, (n-i-1L))` | Returns a quantile | https://www.quora.com/In-R-what-is-the-difference-between-dt-pt-and-qt-in-reference-to-the-student-t-distribution; http://www.r-tutor.com/category/r-functions/qt |
Python | `t = ttest.ppf(p, (n - i - 1))  # inverse CDF, like qt() in R` (p is passed in list form because the API requires it) | Returns a quantile | `from scipy.stats.distributions import t as ttest`; https://blog.csdn.net/m0_37777649/article/details/74938120; note: the ttest object has many methods, e.g. cdf(), pdf(), ppf(); I chose ppf because its API matches the inverse-CDF meaning of R's qt(); https://docs.scipy.org/doc/scipy/reference/generated/scipy.stats.t.html |
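To see the Python half of the table working end to end, here is a minimal sketch on a synthetic weekly series. The parameter values reuse the ones from the table (freq=4, frac=0.2, kind='slinear', ddof=0); the series itself, the 0.975 probability, and the degrees of freedom are placeholders. Note that seasonal_decompose's freq= argument was renamed period= in newer statsmodels, and Series.mad() was removed in pandas 2.0:

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm
from scipy import interpolate, stats
from scipy.stats.distributions import t as ttest

# Synthetic weekly series standing in for PN_usage (placeholder data).
n = 104
rng = np.random.RandomState(0)
data = pd.DataFrame({
    'date': pd.date_range('2015-01-03', periods=n, freq='7D'),
    'usage': 10 + 3 * np.sin(np.arange(n) * 2 * np.pi / 52) + rng.randn(n),
})

# Trend / seasonal / residual decomposition (freq=4 as in the table;
# newer statsmodels spells this argument period=4).
decomposition = sm.tsa.seasonal_decompose(data.set_index('date'), model='additive', freq=4)

# Interpolation; kind='slinear' is what matched R's reinterpolate in the tests above.
xinput = np.linspace(1, n, num=n)
xnew = np.linspace(1, n, num=round(n * 4.34524))
query = interpolate.interp1d(xinput, data['usage'], kind='slinear')(xnew)

# LOWESS smoothing with frac=0.2, then a z-score transform.
smoothed = sm.nonparametric.lowess(endog=data['usage'], exog=np.arange(n), frac=0.2)[:, 1]
reference = stats.zscore(smoothed, ddof=0)

# MAD and a Student-t quantile, both used later by the ESD step
# (Series.mad() exists only in pandas < 2.0).
data_sigma = data['usage'].mad()
t_crit = ttest.ppf(0.975, n - 2)  # inverse CDF, like qt() in R

print(round(data_sigma, 3), round(t_crit, 3), query[:3], reference[:3])
```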
Loading the data:

R:

```r
usage = read.csv('data/TopmostPN_filter_weekly_consumption(change)/R31 1019 usage.csv', header = TRUE)
ib = read.csv('data/TopmostPN_ib_filter(change)/total topmost ib with new startpoint2.0.csv', header = FALSE)
```

Python:

```python
usage = pd.read_csv('R31 1019 usage.csv')
ib = pd.read_csv('total topmost ib with new startpoint2.0.csv', header=None)
```
Data format of `usage`:

```
   TopmostPN       date usage
1    04W0433 01/01/2011     6
2    04W4459 01/01/2011     1
3    13N7096 01/01/2011     3
4    41W1479 01/01/2011     1
5    42T0152 01/01/2011     3
6    42T0476 01/01/2011     1
7    42W3769 01/01/2011     1
8    42X4880 01/01/2011     1
9    42X5067 01/01/2011     1
10   43N8359 01/01/2011     1
```
Indexing in R starts at 1 (there is no 0), so 2:3 means columns 2 through 3, both inclusive.

```r
pn = 'ABCDEF'
PN_usage = usage[usage[[1]] == pn, 2:3]
```
Indexing in Python starts at 0, and a slice excludes its endpoint, so iloc[:, 1:3] selects the 2nd and 3rd columns (indices 1 and 2), both inclusive.

```python
pn = 'ABCDEF'
PN_usage = usage[usage['TopmostPN'] == pn].iloc[:, 1:3]  # keep date and usage for rows whose TopmostPN equals pn
```
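A quick, illustrative check of the slice on toy data:

```python
import pandas as pd

# One toy row in the same column layout as the usage file.
df = pd.DataFrame({'TopmostPN': ['04W0433'], 'date': ['01/01/2011'], 'usage': [6]})
print(df.iloc[:, 1:3].columns.tolist())  # ['date', 'usage']: the 2nd and 3rd columns
```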
R:

```r
# library("lubridate") must be loaded before mdy can be used
PN_usage$date = mdy(PN_usage$date)  # dates are originally month/day/year; after this they are year-month-day
PN_usage = PN_usage[order(PN_usage$date),]  # sort the whole PN_usage data frame by date, ascending
rownames(PN_usage) = 1:length(rownames(PN_usage))  # equivalent to Python's df.reset_index()
```
Python:

```python
PN_usage['date'] = pd.to_datetime(PN_usage['date'], format='%m/%d/%Y')  # convert the object dtype to datetime64[ns]
PN_usage = PN_usage.sort_values(by='date')
PN_usage = PN_usage.reset_index(drop=True)  # drop=True discards the old index rather than keeping it as a new column
```
R:

```r
# The preprocessing below: if adjacent dates do not all differ by exactly 7 days, some dates are missing,
# so generate the full weekly sequence between the minimum and maximum date and merge against it.
if(!(all(diff(PN_usage[[1]]) == 7))){  # condition: not all adjacent dates differ by 7 days
  # syntax (defaults): diff(x, lag = 1, differences = 1, ...)
  # for a numeric vector x this is the lag-1 difference, i.e. each element minus the one before it
  time.min <- PN_usage[[1]][1]
  time.max <- PN_usage[[1]][length(PN_usage[[1]])]
  all.dates <- seq(time.min, time.max, by="week")  # weekly sequence from the earliest date to the latest
  all.dates.frame <- data.frame(list(date=all.dates))
  PN_usage <- merge(all.dates.frame, PN_usage, all=T)
  PN_usage[[2]][which(is.na(PN_usage[[2]]))] <- 0  # weeks that were missing get a usage of 0
}
# PN_usage is now the weekly consumption over the whole period, at 7-day intervals
```
Python:

```python
PN_usage['interval'] = PN_usage['date'].diff().dt.days  # gap in days to the previous row (first row is NaN)
if not all(PN_usage['interval'].dropna() == 7):  # mirrors the R check: not all adjacent dates differ by 7 days
    start = PN_usage['date'].min()
    end = PN_usage['date'].max()
    print("enter if")
    sevendayscnt = int((end - start).days / 7 + 1)
    fulldate = []
    for i in range(0, sevendayscnt):
        fulldate.append(start + datetime.timedelta(days=7 * i))
    fulldatedf = pd.DataFrame(fulldate)
    fulldatedf.columns = ['date']
    result = pd.merge(fulldatedf, PN_usage, on='date', how='outer')  # the default is an inner join, so request an outer join explicitly
    PN_usage = result[['date', 'usage']].fillna(0)
```
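As an aside, the same gap filling can also be sketched with pandas built-ins; this is an untested alternative, assuming PN_usage holds only the date and usage columns at this point:

```python
import pandas as pd

# Build the complete 7-day grid anchored at the first observed date,
# then reindex onto it, filling the missing weeks with 0 usage.
full_index = pd.date_range(PN_usage['date'].min(), PN_usage['date'].max(), freq='7D')
PN_usage = (PN_usage.set_index('date')
                    .reindex(full_index, fill_value=0)
                    .rename_axis('date')
                    .reset_index())
```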
R:

```r
tt <- nrow(PN_usage)  # number of weekly observations
```

Python:

```python
tt = len(PN_usage)  # number of weekly observations
```