The data again comes from a credit card fraud detection competition on Kaggle. The data quality is high and the positive/negative class ratio is extremely skewed, making it a classic anomaly detection dataset; we use it here to test how various anomaly detection methods perform. Of course, the results could differ substantially on another dataset, so they are for reference only.
Credit card fraud refers to deliberately using a forged or invalidated credit card, fraudulently using someone else's card to obtain money or goods, or maliciously overdrawing one's own card. It takes three main forms: use of lost or stolen cards, fraudulent applications, and counterfeit cards. More than 60% of fraud cases involve counterfeit cards, typically run by organized groups that steal card data, manufacture and sell fake cards, and then use them to commit fraud for profit. Credit card fraud detection is therefore an important way for banks to reduce losses.
The dataset contains transactions made by European cardholders via credit card in September 2013. It covers two days of transactions: out of 284,807 transactions there are 492 frauds, so the dataset is highly imbalanced, with the positive class (fraud) accounting for only 0.172% of all transactions. The original data has been anonymized and transformed with PCA: features V1, V2, …, V28 are principal components obtained via PCA, and the only features not transformed by PCA are Time and Amount. Time is the number of seconds elapsed between each transaction and the first transaction in the dataset; Amount is the transaction amount. Class is the label: 1 for fraud, 0 otherwise. The task is to build a classification model on this data to detect credit card fraud.
Data source: https://www.kaggle.com/datasets/mlg-ulb/creditcardfraud/
Install and import the dependencies:
pip install pandas -i https://pypi.tuna.tsinghua.edu.cn/simple
pip install seaborn -i https://pypi.tuna.tsinghua.edu.cn/simple
import warnings
warnings.filterwarnings("ignore")
import os
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
#plt.style.use('seaborn')
import tensorflow as tf
import seaborn as sns
from sklearn.model_selection import train_test_split
from keras.models import Model, load_model
from keras.layers import Input, Dense, LeakyReLU, BatchNormalization
from keras.callbacks import ModelCheckpoint
from keras import regularizers
from sklearn.preprocessing import StandardScaler
from sklearn.metrics import roc_curve, auc, precision_recall_curve
# Set the working directory
os.chdir('/Users/xinghuatianying/projects/CreditCardFraudDetection')
os.getcwd()
# Read the data
d = pd.read_csv('creditcard.csv')
# Check the class balance
num_nonfraud = np.sum(d['Class'] == 0)
num_fraud = np.sum(d['Class'] == 1)
plt.bar(['Fraud', 'non-fraud'], [num_fraud, num_nonfraud], color='dodgerblue')
plt.show()
# Drop the Time column and standardize Amount
data = d.drop(['Time'], axis=1)
data['Amount'] = StandardScaler().fit_transform(data[['Amount']])
X = data.drop(['Class'],axis=1)
Y = data.Class
# Autoencoder hyperparameters
input_dim = X.shape[1]  # dimensionality of the input samples
encoding_dim = 128  # encoding dimension
num_epoch = 3  # number of training epochs
batch_size = 256  # samples per batch
input_layer = Input(shape=(input_dim,))
### encoder
encoder = Dense(encoding_dim,
                activation="tanh",
                activity_regularizer=regularizers.l1(10e-5)
                )(input_layer)
encoder = BatchNormalization()(encoder)
encoder = LeakyReLU(alpha=0.2)(encoder)
encoder = Dense(int(encoding_dim/2),
                activation="relu"
                )(encoder)
encoder = BatchNormalization()(encoder)
encoder = LeakyReLU(alpha=0.1)(encoder)
encoder = Dense(int(encoding_dim/4),
                activation="relu"
                )(encoder)
encoder = BatchNormalization()(encoder)
### decoder
decoder = LeakyReLU(alpha=0.1)(encoder)
decoder = Dense(int(encoding_dim/4),
                activation='tanh'
                )(decoder)
decoder = BatchNormalization()(decoder)
decoder = LeakyReLU(alpha=0.1)(decoder)
decoder = Dense(int(encoding_dim/2),
                activation='tanh'
                )(decoder)
decoder = BatchNormalization()(decoder)
decoder = LeakyReLU(alpha=0.1)(decoder)
decoder = Dense(input_dim,
                #activation='relu'
                )(decoder)
autoencoder = Model(inputs=input_layer, outputs=decoder)
autoencoder.compile(optimizer='adam',
                    loss='mean_squared_error',
                    metrics=['mae', 'mse']
                    )
# Save the model as my_model.h5 and start training
checkpointer = ModelCheckpoint(filepath="/Users/xinghuatianying/projects/CreditCardFraudDetection/my_model.h5",
                               monitor='loss',  # no validation data is passed to fit, so monitor the training loss
                               verbose=0,
                               save_best_only=True
                               )
history = autoencoder.fit(X,
                          X,
                          epochs=num_epoch,
                          batch_size=batch_size,
                          shuffle=True,
                          #validation_data=(X_test, X_test),
                          verbose=1,
                          callbacks=[checkpointer]
                          ).history
autoencoder.save('/Users/xinghuatianying/projects/CreditCardFraudDetection/my_model.h5')
# Model prediction
autoencoder = load_model('/Users/xinghuatianying/projects/CreditCardFraudDetection/my_model.h5')
# Reconstruct all samples with the trained autoencoder (no train/test split is used here)
pred_X = autoencoder.predict(X)
# Compute the reconstruction errors (MSE and MAE)
mse_X = np.mean(np.power(X - pred_X, 2), axis=1)
mae_X = np.mean(np.abs(X - pred_X), axis=1)
data['mse_X'] = mse_X
data['mae_X'] = mae_X
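The imports above bring in roc_curve and auc but never use them; since the reconstruction error is an anomaly score, it can be evaluated directly with ROC AUC. Below is a minimal, self-contained sketch using synthetic error values (the gamma distributions are assumptions for illustration); in the actual pipeline, mse_X and data['Class'] would play the roles of scores and labels:

```python
import numpy as np
from sklearn.metrics import roc_curve, auc

# Synthetic stand-ins: normal samples tend to have small reconstruction
# errors, fraud samples larger ones (both distributions are assumptions)
rng = np.random.default_rng(0)
normal_err = rng.gamma(shape=2.0, scale=0.5, size=1000)
fraud_err = rng.gamma(shape=6.0, scale=1.0, size=20)

scores = np.concatenate([normal_err, fraud_err])        # anomaly score = reconstruction error
labels = np.concatenate([np.zeros(1000), np.ones(20)])  # 1 = fraud

# Larger error -> more anomalous, so scores rank the positives directly
fpr, tpr, _ = roc_curve(labels, scores)
print('ROC AUC: {:.3f}'.format(auc(fpr, tpr)))
```

Unlike Top-N precision, ROC AUC is threshold-free and summarizes ranking quality across the whole dataset, which makes it a useful complement on such an imbalanced problem.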
# Top-N precision evaluation
n = 500
df = data.sort_values(by='mse_X', ascending=False)
df = df.head(n)
num_hits = df[df['Class'] == 1].shape[0]
rate = num_hits / n
print('Top-{} precision: {}, number of fraud samples: {}'.format(n, rate, num_hits))
Output:
Top-500 precision: 0.226, number of fraud samples: 113
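The Top-N evaluation above can be factored into a small reusable helper, so different N values and error metrics (mse_X vs. mae_X) can be compared easily; `top_n_precision` is a name introduced here for illustration:

```python
import numpy as np

def top_n_precision(scores, labels, n):
    """Fraction of true positives among the n samples with the largest anomaly score."""
    order = np.argsort(scores)[::-1][:n]  # indices of the n largest scores
    return float(np.asarray(labels)[order].mean())

# Toy check: the two largest scores belong to the two positive samples
print(top_n_precision([0.1, 0.9, 0.2, 0.8], [0, 1, 0, 1], n=2))  # -> 1.0
```

In the pipeline above this would be called as top_n_precision(data['mse_X'].values, data['Class'].values, 500).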