The hard part of this assignment was cross-validation for a neural network; before this I had only done cross-validation for classical machine-learning models, and I only got it working by following a WeChat article. Reference link: https://mp.weixin.qq.com/s?__biz=MzA4OTg5NzY3NA==&mid=2649345834&idx=1&sn=3c748d2d3c0ac89395da25a07a75cefa&chksm=880e808fbf7909999e2775254dc6ac0b02fd4fc582977a4de640b6249d838725d6fedf00aead&mpshare=1&scene=23&srcid=1211ZPzaGxnuVALs2XUXGnMx&sharer_sharetime=1576057502530&sharer_shareid=d4f40f7a25def68e84cbad465c76535f#rd

import pandas as pd
import matplotlib.pyplot as plt
%matplotlib inline
import warnings
warnings.filterwarnings('ignore')
test=pd.read_csv('CMP3751M_CMP9772M_ML_Assignment 2-dataset-nuclear_plants_final.csv')
test.head()
Status Power_range_sensor_1 Power_range_sensor_2 Power_range_sensor_3 Power_range_sensor_4 Pressure _sensor_1 Pressure _sensor_2 Pressure _sensor_3 Pressure _sensor_4 Vibration_sensor_1 Vibration_sensor_2 Vibration_sensor_3 Vibration_sensor_4
0 Normal 4.5044 0.7443 6.3400 1.9052 29.5315 0.8647 2.2044 6.0480 14.4659 21.6480 15.3429 1.2186
1 Normal 4.4284 0.9073 5.6433 1.6232 27.5032 1.4704 1.9929 5.9856 20.8356 0.0646 14.8813 7.3483
2 Normal 4.5291 1.0199 6.1130 1.0565 26.4271 1.9247 1.9420 6.7162 5.3358 11.0779 25.0914 9.2408
3 Normal 5.1727 1.0007 7.8589 0.2765 25.1576 2.6090 2.9234 6.7485 1.9017 1.8463 28.6640 4.0157
4 Normal 5.2258 0.6125 7.9504 0.1547 24.0765 3.2113 4.4563 5.8411 0.5077 9.3700 34.8122 13.4966
test.describe()
Power_range_sensor_1 Power_range_sensor_2 Power_range_sensor_3 Power_range_sensor_4 Pressure _sensor_1 Pressure _sensor_2 Pressure _sensor_3 Pressure _sensor_4 Vibration_sensor_1 Vibration_sensor_2 Vibration_sensor_3 Vibration_sensor_4
count 996.000000 996.000000 996.000000 996.000000 996.000000 996.000000 996.000000 996.000000 996.000000 996.000000 996.000000 996.000000
mean 4.999574 6.379273 9.228112 7.355272 14.199127 3.077958 5.749234 4.997002 8.164563 10.001593 15.187982 9.933591
std 2.764856 2.312569 2.532173 4.354778 11.680045 2.126091 2.526136 4.165490 6.173261 7.336233 12.159625 7.282383
min 0.008200 0.040300 2.583966 0.062300 0.024800 0.008262 0.001224 0.005800 0.000000 0.018500 0.064600 0.009200
25% 2.892120 4.931750 7.511400 3.438141 5.014875 1.415800 4.022800 1.581625 3.190292 4.004200 5.508900 3.842675
50% 4.881100 6.470500 9.348000 7.071550 11.716802 2.672400 5.741357 3.859200 6.752900 8.793050 12.185650 8.853050
75% 6.794557 8.104500 11.046800 10.917400 20.280250 4.502500 7.503578 7.599900 11.253300 14.684055 21.835000 14.357400
max 12.129800 11.928400 15.759900 17.235858 67.979400 10.242738 12.647500 16.555620 36.186438 34.867600 53.238400 43.231400
test.isnull().sum()
Status                   0
Power_range_sensor_1     0
Power_range_sensor_2     0
Power_range_sensor_3     0
Power_range_sensor_4     0
Pressure _sensor_1       0
Pressure _sensor_2       0
Pressure _sensor_3       0
Pressure _sensor_4       0
Vibration_sensor_1       0
Vibration_sensor_2       0
Vibration_sensor_3       0
Vibration_sensor_4       0
dtype: int64
def function(a):
    if 'Normal' in a:
        return 1
    else:
        return 0

test['Status'] = test.apply(lambda x: function(x['Status']), axis=1)
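For reference, an equivalent vectorized encoding (assuming the Status column only takes the values 'Normal' and 'Abnormal') would avoid the row-wise apply:

test['Status'] = (test['Status'] == 'Normal').astype(int)  # 1 for Normal, 0 otherwise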
from sklearn.preprocessing import StandardScaler
x = test.drop(['Status'],axis=1)
y=test['Status']
X_scaler = StandardScaler()
x = X_scaler.fit_transform(x)
print('data shape: {0}; no. positive: {1}; no. negative: {2}'.format(x.shape, y[y==1].shape[0], y[y==0].shape[0]))
from sklearn.model_selection import train_test_split
X_train, X_test, y_train, y_test = train_test_split(x, y, test_size=0.2)
data shape: (996, 12); no. positive: 498; no. negative: 498
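Note that the StandardScaler above is fit on the full dataset before splitting, so the test rows influence the scaling statistics. A leak-free variant is sketched below; the variable names and the random_state value are illustrative choices, not part of the original code:

from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler

X_raw = test.drop(['Status'], axis=1)
X_train_raw, X_test_raw, y_train, y_test = train_test_split(X_raw, y, test_size=0.2, random_state=42)
scaler = StandardScaler()
X_train = scaler.fit_transform(X_train_raw)   # fit the scaler on the training rows only
X_test = scaler.transform(X_test_raw)         # apply the same scaling to the held-out rows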
import keras
from keras.models import Sequential
from keras.layers import Dense
classifier = Sequential()
Using Theano backend.
WARNING (theano.configdefaults): g++ not available, if using conda: `conda install m2w64-toolchain`
WARNING (theano.configdefaults): g++ not detected ! Theano will be unable to execute optimized C-implementations (for both CPU and GPU) and will default to Python implementations. Performance will be severely degraded. To remove this warning, set Theano flags cxx to an empty string.
WARNING (theano.tensor.blas): Using NumPy C-API based implementation for BLAS functions.
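These warnings mean the Theano backend cannot find a C++ compiler and will fall back to slow pure-Python implementations. One way around this, assuming TensorFlow is installed, is to switch the multi-backend Keras to TensorFlow before importing it (a sketch; the backend can also be set in ~/.keras/keras.json):

import os
os.environ['KERAS_BACKEND'] = 'tensorflow'  # must be set before `import keras`
import keras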
for i in [50, 500, 1000]:
    classifier = Sequential()
    classifier.add(Dense(i, kernel_initializer='uniform', activation='sigmoid', input_dim=12))
    classifier.add(Dense(1, kernel_initializer='uniform', activation='sigmoid'))
    classifier.compile(optimizer='adam', loss='binary_crossentropy', metrics=['accuracy'])
    classifier.fit(X_train, y_train, batch_size=700, epochs=1)
    print(' nerve cell{} accuracy rate '.format(i))
Epoch 1/1
796/796 [==============================] - 2s 2ms/step - loss: 0.6935 - accuracy: 0.5113
 nerve cell50 accuracy rate 
Epoch 1/1
796/796 [==============================] - 14s 17ms/step - loss: 0.6934 - accuracy: 0.5113
 nerve cell500 accuracy rate 
Epoch 1/1
796/796 [==============================] - 28s 35ms/step - loss: 0.6924 - accuracy: 0.4975
 nerve cell1000 accuracy rate 
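The loop above only prints a label for each layer width; it never reports the accuracy that label promises. A minimal sketch of evaluating each model on the held-out split (same architecture and training settings; the variable name model and verbose=0 are just illustrative choices):

for i in [50, 500, 1000]:
    model = Sequential()
    model.add(Dense(i, kernel_initializer='uniform', activation='sigmoid', input_dim=12))
    model.add(Dense(1, kernel_initializer='uniform', activation='sigmoid'))
    model.compile(optimizer='adam', loss='binary_crossentropy', metrics=['accuracy'])
    model.fit(X_train, y_train, batch_size=700, epochs=1, verbose=0)
    loss, acc = model.evaluate(X_test, y_test, verbose=0)   # accuracy on the test split
    print('{} hidden units: test accuracy {:.4f}'.format(i, acc))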
from keras.wrappers.scikit_learn import KerasClassifier
from sklearn.model_selection import cross_val_score
def make_classifier():
    classifier = Sequential()
    classifier.add(Dense(3, kernel_initializer='uniform', activation='relu', input_dim=12))
    classifier.add(Dense(3, kernel_initializer='uniform', activation='relu'))
    classifier.add(Dense(1, kernel_initializer='uniform', activation='sigmoid'))
    classifier.compile(optimizer='adam', loss='binary_crossentropy', metrics=['accuracy'])
    return classifier

# epochs (rather than the legacy nb_epoch) is the Keras 2 argument name
classifier = KerasClassifier(build_fn=make_classifier, batch_size=100, epochs=300)
accuracies = cross_val_score(estimator=classifier, X=X_train, y=y_train, cv=10, n_jobs=-1)
mean = accuracies.mean()
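The mean is computed above but never displayed; assuming the cell above ran successfully, the 10-fold result can be reported with its spread like this:

print('10-fold CV accuracy: {:.4f} (+/- {:.4f})'.format(accuracies.mean(), accuracies.std()))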
import keras
from keras.wrappers.scikit_learn import KerasClassifier
from sklearn.model_selection import cross_val_score
from keras.models import Sequential
from keras.layers import Dense
for i in [50,500,1000]:classifier = Sequential()classiifier.add(Dense(i,kernel_initializer = 'uniform', activation = 'relu', input_dim=12))classifier.add(Dense(1, kernel_initializer = 'uniform', activation = 'sigmoid'))classifier.compile(optimizer= 'adam',loss = 'binary_crossentropy',metrics = ['accuracy'])classifier.fit(X_train, y_train, batch_size = 100, epochs = 300)print(' nerve cell{} accuracy rate '.format(i))
---------------------------------------------------------------------------
NameError                                 Traceback (most recent call last)
<ipython-input-14-f1690d4285f8> in <module>()
      6 for i in [50,500,1000]:
      7     classifier = Sequential()
----> 8     classiifier.add(Dense(i,kernel_initializer = 'uniform', activation = 'relu', input_dim=12))
      9     classifier.add(Dense(1, kernel_initializer = 'uniform', activation = 'sigmoid'))
     10     classifier.compile(optimizer= 'adam',loss = 'binary_crossentropy',metrics = ['accuracy'])

NameError: name 'classiifier' is not defined
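The NameError comes from the misspelled variable name on the first add call (classiifier instead of classifier). A corrected sketch of the loop, which simply fixes the name and otherwise keeps the same settings (the clearer print message is my own wording):

for i in [50, 500, 1000]:
    classifier = Sequential()
    classifier.add(Dense(i, kernel_initializer='uniform', activation='relu', input_dim=12))
    classifier.add(Dense(1, kernel_initializer='uniform', activation='sigmoid'))
    classifier.compile(optimizer='adam', loss='binary_crossentropy', metrics=['accuracy'])
    classifier.fit(X_train, y_train, batch_size=100, epochs=300)
    print('{} hidden units trained'.format(i))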
