Preface

This article walks through code implementations for data analysis, covering:
Linear Regression
Logistic Regression
Support Vector Machine
Convolutional Neural Network

Linear Regression

import tensorflow as tf
import numpy as np

def read_data():
    """Read the data (placeholder)."""
    pass

# X holds one sample of 3 features; w maps it to a single prediction.
X = tf.placeholder(tf.float32, shape=[1, 3], name="X")
Y = tf.placeholder(tf.float32, name="Y")
w = tf.Variable(tf.random_normal(shape=[3, 1], dtype=tf.float32), name="weight")
b = tf.Variable(.0, dtype=tf.float32, name="bias")
y_predict = tf.matmul(X, w) + b
loss = tf.square(Y - y_predict, name='loss')
optimizer = tf.train.GradientDescentOptimizer(learning_rate=.001).minimize(loss)

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    epochs = 100
    for i in range(epochs):
        total_loss = 0
        for j in range(len(data)):
            # Placeholder readers; add functions to get the actual x and y values.
            x_ = read_x_data()
            y_ = read_y_data()
            _, loss_ = sess.run([optimizer, loss], feed_dict={X: x_, Y: y_})
            total_loss += loss_
    w_, b_ = sess.run([w, b])
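Since the data loaders above are placeholders, the sketch below shows the same gradient-descent linear regression in plain NumPy on synthetic data (the data and coefficients here are invented purely for illustration):

```python
import numpy as np

# Synthetic data (illustrative): y = 2*x1 - x2 + 0.5*x3 + 3
rng = np.random.RandomState(0)
X = rng.randn(200, 3)
true_w = np.array([2.0, -1.0, 0.5])
y = X @ true_w + 3.0

# Full-batch gradient descent on mean squared error,
# mirroring what the TF training loop above optimizes.
w = np.zeros(3)
b = 0.0
lr = 0.1
for _ in range(500):
    y_pred = X @ w + b
    err = y_pred - y
    w -= lr * (2 / len(X)) * (X.T @ err)
    b -= lr * (2 / len(X)) * err.sum()

print(np.round(w, 2), round(b, 2))
```

With enough iterations the estimates recover the generating coefficients, which is a handy sanity check before moving to real data.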

Logistic Regression

import pandas  # maybe useful
import tensorflow as tf
import numpy as np

csv_path = "./data/haberman.csv"
data = pandas.read_csv(csv_path).to_numpy()

X = tf.placeholder(tf.float32, shape=[1, 3], name="X")
Y = tf.placeholder(tf.float32, name="Y")
w = tf.Variable(tf.random_normal(shape=[3, 1], dtype=tf.float32), name="weight")
b = tf.Variable(.0, dtype=tf.float32, name="bias")
logits = tf.matmul(X, w) + b
entropy = tf.nn.sigmoid_cross_entropy_with_logits(logits=logits, labels=Y, name="sigmoid")
loss = tf.reduce_mean(entropy)
optimizer = tf.train.AdamOptimizer(learning_rate=.01).minimize(loss)

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    # Train on the first 80% of the data.
    epochs = 10
    for n in range(epochs):
        total_loss = 0
        for i in range(int(len(data) * .8)):
            x_data = np.array(data[i][:3]).reshape((1, 3))
            y_data = float(data[i][3:] - 1)  # map labels {1, 2} to {0, 1}
            _, loss_ = sess.run([optimizer, loss], feed_dict={X: x_data, Y: y_data})
            total_loss += loss_
        print("average loss : {0}".format(total_loss / int(len(data) * .8)))
    w_, b_ = sess.run([w, b])
    print([w_, b_])

    # Test on the remaining 20%, inside the same session so the trained
    # weights are kept (re-initializing in a new session would discard them).
    print("start test!")
    value = tf.nn.sigmoid(logits)
    accuracy = 0
    for i in range(int(len(data) * .8), len(data)):
        x_data = np.array(data[i][:3]).reshape((1, 3))
        y_data = float(data[i][3:] - 1)
        pred = sess.run(value, feed_dict={X: x_data})[0][0]
        # Threshold the probability at 0.5 instead of testing exact equality.
        accuracy += float(round(pred) == y_data)
    print("accuracy : {0}%".format(accuracy / int(len(data) * .2) * 100))
    print("Done!")
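For reference, `tf.nn.sigmoid_cross_entropy_with_logits` evaluates the numerically stable form `max(z, 0) - z*y + log(1 + exp(-|z|))` rather than the textbook `-y*log(p) - (1-y)*log(1-p)`. A quick NumPy check that the two agree:

```python
import numpy as np

def sigmoid_cross_entropy(logit, label):
    """Numerically stable sigmoid cross-entropy:
    max(z, 0) - z*y + log(1 + exp(-|z|))."""
    return max(logit, 0) - logit * label + np.log1p(np.exp(-abs(logit)))

# Cross-check against the naive definition -y*log(p) - (1-y)*log(1-p).
z, y = 1.5, 1.0
p = 1 / (1 + np.exp(-z))
naive = -y * np.log(p) - (1 - y) * np.log(1 - p)
print(abs(sigmoid_cross_entropy(z, y) - naive) < 1e-12)  # True
```

The stable form matters for large-magnitude logits, where `exp(z)` in the naive formula overflows.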

SVM

import numpy as np
from sklearn import svm

path_file = "./Image_Files/fer2013.csv"

def data_get(split):
    """param split: "Training" or "Test" (placeholder loader)."""
    pass

train_dataX, train_dataY = data_get("Training")
clf = svm.SVC()
clf.fit(train_dataX, train_dataY)
print(clf.score(train_dataX, train_dataY))
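Because `data_get` is only a stub here, the same `svm.SVC` workflow can be sanity-checked end to end on scikit-learn's built-in iris dataset (a stand-in chosen for illustration, not the fer2013 data above), scored on a held-out split rather than the training set:

```python
from sklearn import svm
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split

# Small built-in dataset as a stand-in for the real loader.
X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=0)

clf = svm.SVC()
clf.fit(X_train, y_train)
print(clf.score(X_test, y_test))
```

Scoring on `X_test` rather than `train_dataX` gives a less optimistic (and more honest) accuracy estimate.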

Convolutional Neural Network

import tensorflow as tf
import numpy as np
from PIL import Image

X = tf.placeholder(tf.float32, [28, 28], name="X_placeholder")
Y = tf.placeholder(tf.float32, [10], name="Y_placeholder")

# convolution layer
images = tf.reshape(X, shape=[-1, 28, 28, 1])
kernel = tf.get_variable('kernel', [5, 5, 1, 16], initializer=tf.truncated_normal_initializer())
biases = tf.get_variable('bias', [16], initializer=tf.random_normal_initializer())
conv = tf.nn.conv2d(images, kernel, strides=[1, 1, 1, 1], padding="SAME")
conv = tf.nn.relu(conv + biases, name="conv")

# max pooling
pool = tf.nn.max_pool(conv, ksize=[1, 2, 2, 1], strides=[1, 2, 2, 1], padding='SAME')
pool = tf.reshape(pool, [-1, 14 * 14 * 16])

# fully connected layer
w1 = tf.get_variable('weights', [14 * 14 * 16, 1024], initializer=tf.truncated_normal_initializer())
b1 = tf.get_variable('biases', [1024], initializer=tf.random_normal_initializer())
f = tf.nn.relu(tf.matmul(pool, w1) + b1, name="relu")

# dropout
f = tf.nn.dropout(f, .75, name='relu_dropout')

# output layer
w = tf.get_variable('weights_', [1024, 10], initializer=tf.truncated_normal_initializer())
b = tf.get_variable('bias_', [10], initializer=tf.random_normal_initializer())

# softmax cross-entropy loss
logits = tf.matmul(f, w) + b
entropy = tf.nn.softmax_cross_entropy_with_logits(
    logits=logits, labels=tf.reshape(Y, [1, 10]), name="entropy")
loss = tf.reduce_mean(entropy)
optimizer = tf.train.AdamOptimizer(learning_rate=.001).minimize(loss)

# load the data (MNIST-style loaders, defined elsewhere)
train_data_images = load_train_images()
train_data_label = load_train_labels()
test_data_images = load_test_images()
test_data_label = load_test_labels()

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    # Train
    total_loss = 0
    for i in range(len(train_data_images)):
        x_data = train_data_images[i]
        y_data = train_data_label[i]
        _, loss_ = sess.run([optimizer, loss], feed_dict={X: x_data, Y: y_data})
        total_loss += loss_
    print("total loss : {0}".format(total_loss))

    # Test: count correct predictions on the test set
    # (not the training set), and leave the graph variables w, b untouched.
    softmax_ = tf.nn.softmax(logits)
    pred = tf.equal(tf.argmax(softmax_, 1), tf.argmax(Y, 0))
    correct = 0
    for i in range(len(test_data_images)):
        x_data = test_data_images[i]
        y_data = test_data_label[i]
        correct += sess.run(tf.cast(pred, tf.float32), feed_dict={X: x_data, Y: y_data})[0]
    print("accuracy : {0}".format(correct / len(test_data_images)))
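The reshape to `14*14*16` after pooling follows from the SAME-padding output-size rule, `out = ceil(in / stride)`. A quick sketch of the shape arithmetic for the network above:

```python
import math

def same_out(size, stride):
    """Spatial output size with SAME padding: ceil(size / stride)."""
    return math.ceil(size / stride)

# Trace the 28x28 input through the layers above:
h = same_out(28, 1)   # 5x5 conv, stride 1, SAME -> 28
h = same_out(h, 2)    # 2x2 max pool, stride 2, SAME -> 14
flat = h * h * 16     # 16 output channels -> 3136 = 14*14*16
print(h, flat)
```

If the pooling stride or channel count changes, this is the calculation to redo so that `pool = tf.reshape(pool, ...)` and `w1`'s first dimension stay consistent.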
