Sources:
Building a credit card anti-fraud prediction model with machine learning
Which machine learning models are used in fraud detection?
Credit card fraud detection
Building a credit card anti-fraud prediction model — machine learning

Background on credit card transaction data
Core fields: transaction channel, transaction date, merchant/branch name, card number, transaction currency, and transaction amount.
Additional signals: trust level of the card acquirer, card-present (swipe/insert) purchase behaviour along spatial and temporal dimensions, IP address, and so on.

Different dimensions of credit loan data (example field list)

index, id, member_id, loan_amnt, funded_amnt, funded_amnt_inv, term, int_rate, installment, grade, sub_grade, emp_title, emp_length, home_ownership, annual_inc, verification_status, issue_d, loan_status, pymnt_plan, url, desc, purpose, title, zip_code, addr_state, dti, delinq_2yrs, earliest_cr_line, inq_last_6mths, mths_since_last_delinq, mths_since_last_record, open_acc, pub_rec, revol_bal, revol_util, total_acc, initial_list_status, out_prncp, out_prncp_inv, total_pymnt, total_pymnt_inv, total_rec_prncp, total_rec_int, total_rec_late_fee, recoveries, collection_recovery_fee, last_pymnt_d, last_pymnt_amnt, next_pymnt_d, last_credit_pull_d, collections_12_mths_ex_med, mths_since_last_major_derog, policy_code, application_type, annual_inc_joint, dti_joint, verification_status_joint, acc_now_delinq, tot_coll_amt, tot_cur_bal, open_acc_6m, open_il_6m, open_il_12m, open_il_24m, mths_since_rcnt_il, total_bal_il, il_util, open_rv_12m, open_rv_24m, max_bal_bc, all_util, total_rev_hi_lim, inq_fi, total_cu_tl, inq_last_12m

Project background

The dataset contains credit card transactions made by European cardholders in September 2013. It covers two days of transactions, in which 492 of the 284,807 transactions are frauds. The dataset is highly imbalanced: the positive class (fraud) accounts for only 0.172% of all transactions.

It contains only numerical input variables that are the result of a PCA transformation. Unfortunately, due to confidentiality issues, the original features and further background information about the data cannot be provided. Features V1, V2, … V28 are the principal components obtained with PCA; the only features not transformed with PCA are 'Time' and 'Amount'. The feature 'Time' contains the number of seconds elapsed between each transaction and the first transaction in the dataset. The feature 'Amount' is the transaction amount and can be used for example-dependent, cost-sensitive learning. The feature 'Class' is the response variable: it takes the value 1 in case of fraud and 0 otherwise.

Loading the data

import numpy as np
import pandas as pd
import matplotlib.pyplot as plt

data = pd.read_csv("../input/creditcard.csv")
print(data.head())
Time V1 V2 V3 V4 V5 V6 V7 V8 V9 ... V21 V22 V23 V24 V25 V26 V27 V28 Amount Class
0 0.0 -1.359807 -0.072781 2.536347 1.378155 -0.338321 0.462388 0.239599 0.098698 0.363787 ... -0.018307 0.277838 -0.110474 0.066928 0.128539 -0.189115 0.133558 -0.021053 149.62 0
1 0.0 1.191857 0.266151 0.166480 0.448154 0.060018 -0.082361 -0.078803 0.085102 -0.255425 ... -0.225775 -0.638672 0.101288 -0.339846 0.167170 0.125895 -0.008983 0.014724 2.69 0
2 1.0 -1.358354 -1.340163 1.773209 0.379780 -0.503198 1.800499 0.791461 0.247676 -1.514654 ... 0.247998 0.771679 0.909412 -0.689281 -0.327642 -0.139097 -0.055353 -0.059752 378.66 0
3 1.0 -0.966272 -0.185226 1.792993 -0.863291 -0.010309 1.247203 0.237609 0.377436 -1.387024 ... -0.108300 0.005274 -0.190321 -1.175575 0.647376 -0.221929 0.062723 0.061458 123.50 0
4 2.0 -1.158233 0.877737 1.548718 0.403034 -0.407193 0.095921 0.592941 -0.270533 0.817739 ... -0.009431 0.798278 -0.137458 0.141267 -0.206010 0.502292 0.219422 0.215153 69.99 0

Inspecting the class label distribution

count_classes = data['Class'].value_counts().sort_index()
count_classes.plot(kind='bar')
print(count_classes)
plt.title("Fraud class histogram")
plt.xlabel("Class")
plt.ylabel("Frequency")
plt.show()

Output:

0    284315
1       492
Name: Class, dtype: int64

From the counts above, the positive and negative samples are extremely imbalanced. If we train on the full dataset as-is, the label-1 samples are drowned out: the model fits the label-0 samples well but fits the label-1 samples poorly. As a result, overall accuracy under cross-validation still looks very high, while the truly fraudulent (label 1) samples are frequently misclassified.

Possible strategies for handling imbalanced data:

  • Collect more data? Nice strategy but not applicable in this case.
  • Changing the performance metric:
    • Use the confusion matrix to calculate Precision and Recall.
    • F1 score (weighted average of precision and recall).
    • Use Kappa, a classification accuracy normalized by the imbalance of the classes in the data.
    • ROC curves, which characterize the sensitivity/specificity trade-off.
  • Resampling the dataset (see the sketch after this list):
    • Essentially this is a method that processes the data to reach an approximate 50-50 ratio.
    • One way to achieve this is OVER-sampling, which adds copies of the under-represented class (better when you have little data).
    • Another is UNDER-sampling, which deletes instances from the over-represented class (better when you have lots of data).
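Not part of the original write-up: as a rough illustration of the resampling idea above, here is a minimal sketch of random over- and under-sampling with plain pandas, assuming `data` is the DataFrame loaded earlier with a binary `Class` column.

import pandas as pd

# Split the frame by class (1 = fraud / minority, 0 = normal / majority)
fraud = data[data.Class == 1]
normal = data[data.Class == 0]

# OVER-sampling: duplicate minority rows (with replacement) until both classes have equal size
oversampled = pd.concat([normal, fraud.sample(len(normal), replace=True, random_state=0)])

# UNDER-sampling: keep only as many majority rows as there are minority rows
undersampled = pd.concat([normal.sample(len(fraud), random_state=0), fraud])

print(oversampled.Class.value_counts())
print(undersampled.Class.value_counts())

The undersampling code used later in this post follows the same idea, expressed with index arrays instead of DataFrame sampling.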

Planned overall approach

  • We are not going to perform feature engineering in the first instance. The dataset has already been reduced to 30 features (28 anonymised components + time + amount).
  • We will then compare what happens when using resampling and when not using it, testing this approach with a simple logistic regression classifier.
  • We will evaluate the models using some of the performance metrics mentioned above.
  • We will repeat the best resampling/non-resampling method while tuning the parameters of the logistic regression classifier.
  • We will finally build classification models using other classification algorithms.

Feature scaling and resampling

from sklearn.preprocessing import StandardScaler

# Standardize the transaction amount, then drop the raw Time/Amount columns
data['normAmount'] = StandardScaler().fit_transform(data['Amount'].values.reshape(-1, 1))
data = data.drop(['Time', 'Amount'], axis=1)
# print(data.head())
X = data.loc[:, data.columns != 'Class']
y = data.loc[:, data.columns == 'Class']

# Number of data points in the minority class
number_records_fraud = len(data[data.Class == 1])
fraud_indices = np.array(data[data.Class == 1].index)
# Picking the indices of the normal classes
normal_indices = data[data.Class == 0].index
# Out of the indices we picked, randomly select "x" number (number_records_fraud)
random_normal_indices = np.random.choice(normal_indices, number_records_fraud, replace=False)
random_normal_indices = np.array(random_normal_indices)
# Appending the 2 sets of indices
under_sample_indices = np.concatenate([fraud_indices, random_normal_indices])

# Under-sampled dataset
under_sample_data = data.iloc[under_sample_indices, :]
X_undersample = under_sample_data.loc[:, under_sample_data.columns != 'Class']
y_undersample = under_sample_data.loc[:, under_sample_data.columns == 'Class']

# Showing ratio
print("Percentage of normal transactions: ", len(under_sample_data[under_sample_data.Class == 0])/len(under_sample_data))
print("Percentage of fraud transactions: ", len(under_sample_data[under_sample_data.Class == 1])/len(under_sample_data))
print("Total number of transactions in resampled data: ", len(under_sample_data))

Output:

Percentage of normal transactions:  0.5
Percentage of fraud transactions:  0.5
Total number of transactions in resampled data:  984

Splitting the data into training and test sets

This split is needed for cross-validation.

from sklearn.model_selection import train_test_split

# Whole dataset
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)
print("Number transactions train dataset: ", len(X_train))
print("Number transactions test dataset: ", len(X_test))
print("Total number of transactions: ", len(X_train)+len(X_test))

# Undersampled dataset
X_train_undersample, X_test_undersample, y_train_undersample, y_test_undersample = train_test_split(
    X_undersample, y_undersample, test_size=0.3, random_state=0)
print("")
print("Number transactions train dataset: ", len(X_train_undersample))
print("Number transactions test dataset: ", len(X_test_undersample))
print("Total number of transactions: ", len(X_train_undersample)+len(X_test_undersample))

Output:

Number transactions train dataset:  199364
Number transactions test dataset:  85443
Total number of transactions:  284807

Number transactions train dataset:  688
Number transactions test dataset:  296
Total number of transactions:  984

Logistic regression classification (on the undersampled dataset)

  • Accuracy = (TP+TN)/total
  • Precision = TP/(TP+FP)
  • Recall = TP/(TP+FN) (a quick numerical check of these definitions is sketched below)
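Not in the original post: as a quick numerical check of the three formulas above, the toy labels below are made up purely for illustration. For binary labels, scikit-learn's confusion_matrix returns the counts in the order TN, FP, FN, TP when flattened.

from sklearn.metrics import confusion_matrix, precision_score, recall_score, accuracy_score

# Hypothetical toy labels, only to verify the formulas above
y_true = [0, 0, 0, 0, 1, 1, 1, 1]
y_pred = [0, 0, 0, 1, 0, 1, 1, 1]

tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()
print("Accuracy :", (tp + tn) / (tp + tn + fp + fn), "==", accuracy_score(y_true, y_pred))
print("Precision:", tp / (tp + fp), "==", precision_score(y_true, y_pred))
print("Recall   :", tp / (tp + fn), "==", recall_score(y_true, y_pred))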
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import KFold
from sklearn.metrics import confusion_matrix, precision_recall_curve, auc, roc_auc_score, roc_curve, recall_score, classification_report


def printing_Kfold_scores(x_train_data, y_train_data):
    fold = KFold(n_splits=5, shuffle=False)

    # Different C parameters
    c_param_range = [0.01, 0.1, 1, 10, 100]

    results_table = pd.DataFrame(index=range(len(c_param_range)),
                                 columns=['C_parameter', 'Mean recall score'])
    results_table['C_parameter'] = c_param_range

    # each k-fold split gives 2 arrays: the train indices and the test indices
    j = 0
    for c_param in c_param_range:
        print('-------------------------------------------')
        print('C parameter: ', c_param)
        print('-------------------------------------------')
        print('')

        recall_accs = []
        for iteration, (train_idx, test_idx) in enumerate(fold.split(x_train_data), start=1):
            # Call the logistic regression model with a certain C parameter
            lr = LogisticRegression(C=c_param, penalty='l1', solver='liblinear')

            # Use the training portion of the fold to fit the model
            lr.fit(x_train_data.iloc[train_idx, :], y_train_data.iloc[train_idx, :].values.ravel())

            # Predict on the held-out portion of the training data
            y_pred_undersample = lr.predict(x_train_data.iloc[test_idx, :].values)

            # Calculate the recall score and append it to the list for the current c_param
            recall_acc = recall_score(y_train_data.iloc[test_idx, :].values, y_pred_undersample)
            recall_accs.append(recall_acc)
            print('Iteration ', iteration, ': recall score = ', recall_acc)

        # The mean value of those recall scores is the metric we want to save and get hold of
        results_table.loc[j, 'Mean recall score'] = np.mean(recall_accs)
        j += 1
        print('')
        print('Mean recall score ', np.mean(recall_accs))
        print('')

    # Finally, we can check which C parameter is the best amongst the chosen ones
    best_c = results_table.loc[results_table['Mean recall score'].astype('float64').idxmax()]['C_parameter']
    print('*********************************************************************************')
    print('Best model to choose from cross validation is with C parameter = ', best_c)
    print('*********************************************************************************')
    return best_c


best_c = printing_Kfold_scores(X_train_undersample, y_train_undersample)

Output:

-------------------------------------------
C parameter:  0.01
-------------------------------------------

Iteration  1 : recall score =  0.931506849315
Iteration  2 : recall score =  0.931506849315
Iteration  3 : recall score =  1.0
Iteration  4 : recall score =  0.972972972973
Iteration  5 : recall score =  0.969696969697

Mean recall score  0.96113672826

-------------------------------------------
C parameter:  0.1
-------------------------------------------

Iteration  1 : recall score =  0.849315068493
Iteration  2 : recall score =  0.86301369863
Iteration  3 : recall score =  0.949152542373
Iteration  4 : recall score =  0.932432432432
Iteration  5 : recall score =  0.909090909091

Mean recall score  0.900600930204

-------------------------------------------
C parameter:  1
-------------------------------------------

Iteration  1 : recall score =  0.849315068493
Iteration  2 : recall score =  0.890410958904
Iteration  3 : recall score =  0.983050847458
Iteration  4 : recall score =  0.932432432432
Iteration  5 : recall score =  0.924242424242

Mean recall score  0.915890346306

-------------------------------------------
C parameter:  10
-------------------------------------------

Iteration  1 : recall score =  0.86301369863
Iteration  2 : recall score =  0.876712328767
Iteration  3 : recall score =  0.983050847458
Iteration  4 : recall score =  0.932432432432
Iteration  5 : recall score =  0.909090909091

Mean recall score  0.912860043276

-------------------------------------------
C parameter:  100
-------------------------------------------

Iteration  1 : recall score =  0.849315068493
Iteration  2 : recall score =  0.876712328767
Iteration  3 : recall score =  0.983050847458
Iteration  4 : recall score =  0.932432432432
Iteration  5 : recall score =  0.909090909091

Mean recall score  0.910120317248

*********************************************************************************
Best model to choose from cross validation is with C parameter =  0.01
*********************************************************************************

Visualization helper for the confusion matrix

import itertools


def plot_confusion_matrix(cm, classes, normalize=False, title='Confusion matrix', cmap=plt.cm.Blues):
    """
    This function prints and plots the confusion matrix.
    Normalization can be applied by setting `normalize=True`.
    """
    plt.imshow(cm, interpolation='nearest', cmap=cmap)
    plt.title(title)
    plt.colorbar()
    tick_marks = np.arange(len(classes))
    plt.xticks(tick_marks, classes, rotation=0)
    plt.yticks(tick_marks, classes)

    if normalize:
        cm = cm.astype('float') / cm.sum(axis=1)[:, np.newaxis]
        # print("Normalized confusion matrix")
    # else:
    #     print('Confusion matrix, without normalization')
    # print(cm)

    thresh = cm.max() / 2.
    for i, j in itertools.product(range(cm.shape[0]), range(cm.shape[1])):
        plt.text(j, i, cm[i, j],
                 horizontalalignment="center",
                 color="white" if cm[i, j] > thresh else "black")

    plt.tight_layout()
    plt.ylabel('True label')
    plt.xlabel('Predicted label')

Predicting classes on the test set and computing the confusion matrix
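The original post breaks off at this point. As a minimal sketch of what this step would look like (an assumed continuation, not the author's code): train a logistic regression with the `best_c` found above on the undersampled training set, predict on the undersampled test set, and display the result with the `plot_confusion_matrix` helper defined above.

# Assumed continuation: fit with best_c on the undersampled training data,
# predict on the undersampled test data, and plot the resulting confusion matrix.
lr = LogisticRegression(C=best_c, penalty='l1', solver='liblinear')
lr.fit(X_train_undersample, y_train_undersample.values.ravel())
y_pred_undersample = lr.predict(X_test_undersample.values)

cnf_matrix = confusion_matrix(y_test_undersample, y_pred_undersample)
# Recall = TP / (TP + FN), read directly from the confusion matrix counts
print("Recall metric in the testing dataset: ",
      cnf_matrix[1, 1] / (cnf_matrix[1, 0] + cnf_matrix[1, 1]))

plt.figure()
plot_confusion_matrix(cnf_matrix, classes=[0, 1], title='Confusion matrix')
plt.show()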
