Ch 6: Neural Networks

Neural networks have become central to machine learning, growing in popularity thanks to major breakthroughs on previously unsolved problems. We start with 'shallow' neural networks, which are surprisingly powerful and can improve the results of our earlier ML algorithms. We begin with the most basic NN unit, the operational gate, gradually add more pieces to the network, and end by training a model to play tic-tac-toe.

  1. Introduction

    • We introduce the concept of neural networks and how TensorFlow is built to easily handle these algorithms.
  2. Implementing Operational Gates
    • We implement an operational gate with one operation. Then we show how to extend this to multiple nested operations.
  3. Working with Gates and Activation Functions
    • Now we have to introduce activation functions on the gates. We show how different activation functions operate.
  4. Implementing a One Layer Neural Network
    • We have all the pieces to start implementing our first neural network. We do so here with regression on the Iris data set (a minimal preview sketch follows this list).
  5. Implementing Different Layers
    • This section introduces the convolution layer and the max-pool layer. We show how to chain these together in 1D and 2D examples, along with fully connected layers.
  6. Using Multi-layer Neural Networks
    • Here we show how to functionalize different layers and variables for a cleaner multi-layer neural network.
  7. Improving Predictions of Linear Models
    • We show how we can improve the convergence of our prior logistic regression with a set of hidden layers.
  8. Learning to Play Tic-Tac-Toe
    • Given a set of tic-tac-toe boards and corresponding optimal moves, we train a neural network classification model to play. At the end of the script, we can attempt to play against the trained model.
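
Below is the preview sketch promised in item 4: a one-hidden-layer regression network on the Iris data, written in the same TF 1.x style as the recipes that follow. The hidden-layer size, learning rate, and the choice of predicting petal width from the other three features are illustrative assumptions, not necessarily the chapter's exact script.

import tensorflow as tf
import numpy as np
from sklearn import datasets

# Load Iris; predict petal width (column 3) from the other three features
iris = datasets.load_iris()
x_vals = np.array([x[0:3] for x in iris.data], dtype=np.float32)
y_vals = np.array([x[3] for x in iris.data], dtype=np.float32)

sess = tf.Session()
x_data = tf.placeholder(shape=[None, 3], dtype=tf.float32)
y_target = tf.placeholder(shape=[None, 1], dtype=tf.float32)

hidden_nodes = 5  # illustrative layer size
A1 = tf.Variable(tf.random_normal(shape=[3, hidden_nodes]))
b1 = tf.Variable(tf.random_normal(shape=[hidden_nodes]))
A2 = tf.Variable(tf.random_normal(shape=[hidden_nodes, 1]))
b2 = tf.Variable(tf.random_normal(shape=[1]))

hidden = tf.nn.relu(tf.add(tf.matmul(x_data, A1), b1))
output = tf.add(tf.matmul(hidden, A2), b2)

loss = tf.reduce_mean(tf.square(y_target - output))
train_step = tf.train.GradientDescentOptimizer(0.005).minimize(loss)
sess.run(tf.global_variables_initializer())

batch_size = 50
for i in range(500):
    idx = np.random.choice(len(x_vals), size=batch_size)
    feed = {x_data: x_vals[idx], y_target: np.transpose([y_vals[idx]])}
    sess.run(train_step, feed_dict=feed)
    if (i + 1) % 100 == 0:
        print('Step ' + str(i + 1) + ', loss = ' + str(sess.run(loss, feed_dict=feed)))

The pattern (placeholders in, weight and bias Variables per layer, a loss, and an optimizer) is exactly the gate pattern from the sections below, just with matrices in place of scalars.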

02 Implementing an Operational Gate

# Implementing Gates
#----------------------------------
#
# This script shows how to implement
# various gates in TensorFlow
#
# One gate will be one operation with
# a variable and a placeholder.
# We will ask TensorFlow to change the
# variable based on our loss function

import tensorflow as tf
from tensorflow.python.framework import ops
ops.reset_default_graph()

# Start Graph Session
sess = tf.Session()

#----------------------------------
# Create a multiplication gate:
#   f(x) = a * x
#
#  a --
#      |
#      |---- (multiply) --> output
#  x --|
#

a = tf.Variable(tf.constant(4.))
x_val = 5.
x_data = tf.placeholder(dtype=tf.float32)

multiplication = tf.multiply(a, x_data)

# Declare the loss function as the squared difference between
# the output and a target value, 50.
loss = tf.square(tf.subtract(multiplication, 50.))

# Initialize variables
init = tf.global_variables_initializer()
sess.run(init)

# Declare optimizer
my_opt = tf.train.GradientDescentOptimizer(0.01)
train_step = my_opt.minimize(loss)

# Run loop across gate
print('Optimizing a Multiplication Gate Output to 50.')
for i in range(10):
    sess.run(train_step, feed_dict={x_data: x_val})
    a_val = sess.run(a)
    mult_output = sess.run(multiplication, feed_dict={x_data: x_val})
    print(str(a_val) + ' * ' + str(x_val) + ' = ' + str(mult_output))

#----------------------------------
# Create a nested gate:
#   f(x) = a * x + b
#
#  a --
#      |
#      |-- (multiply)--
#  x --|              |
#                     |-- (add) --> output
#                 b --|
#

# Start a New Graph Session
ops.reset_default_graph()
sess = tf.Session()

a = tf.Variable(tf.constant(1.))
b = tf.Variable(tf.constant(1.))
x_val = 5.
x_data = tf.placeholder(dtype=tf.float32)

two_gate = tf.add(tf.multiply(a, x_data), b)

# Declare the loss function as the squared difference between
# the output and a target value, 50.
loss = tf.square(tf.subtract(two_gate, 50.))

# Initialize variables
init = tf.global_variables_initializer()
sess.run(init)

# Declare optimizer
my_opt = tf.train.GradientDescentOptimizer(0.01)
train_step = my_opt.minimize(loss)

# Run loop across gate
print('\nOptimizing Two Gate Output to 50.')
for i in range(10):
    sess.run(train_step, feed_dict={x_data: x_val})
    a_val, b_val = (sess.run(a), sess.run(b))
    two_gate_output = sess.run(two_gate, feed_dict={x_data: x_val})
    print(str(a_val) + ' * ' + str(x_val) + ' + ' + str(b_val) + ' = ' + str(two_gate_output))
Optimizing a Multiplication Gate Output to 50.
7.0 * 5.0 = 35.0
8.5 * 5.0 = 42.5
9.25 * 5.0 = 46.25
9.625 * 5.0 = 48.125
9.8125 * 5.0 = 49.0625
9.90625 * 5.0 = 49.5313
9.95313 * 5.0 = 49.7656
9.97656 * 5.0 = 49.8828
9.98828 * 5.0 = 49.9414
9.99414 * 5.0 = 49.9707

Optimizing Two Gate Output to 50.
5.4 * 5.0 + 1.88 = 28.88
7.512 * 5.0 + 2.3024 = 39.8624
8.52576 * 5.0 + 2.50515 = 45.134
9.01236 * 5.0 + 2.60247 = 47.6643
9.24593 * 5.0 + 2.64919 = 48.8789
9.35805 * 5.0 + 2.67161 = 49.4619
9.41186 * 5.0 + 2.68237 = 49.7417
9.43769 * 5.0 + 2.68754 = 49.876
9.45009 * 5.0 + 2.69002 = 49.9405
9.45605 * 5.0 + 2.69121 = 49.9714
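
Why the convergence looks geometric: the loss is L = (a*x - 50)^2, so gradient descent updates a with a <- a - lr * 2*(a*x - 50)*x. With x = 5 and lr = 0.01, the error (a - 10) shrinks by a factor of 1 - 2*lr*x^2 = 0.5 each step, which is exactly the halving visible above. A minimal NumPy sketch (no TensorFlow; same starting value a = 4 as the script) reproduces the printed sequence:

import numpy as np

a, x, target, lr = 4.0, 5.0, 50.0, 0.01
for i in range(10):
    grad = 2.0 * (a * x - target) * x   # dL/da for L = (a*x - target)^2
    a -= lr * grad                      # gradient descent step
    print(str(a) + ' * ' + str(x) + ' = ' + str(a * x))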

03 Working with Activation Functions

# Combining Gates and Activation Functions
#----------------------------------
#
# This script shows how to implement
# various gates with activation functions
# in TensorFlow
#
# This script is an extension of the
# prior gates, but with various activation
# functions.

import tensorflow as tf
import numpy as np
import matplotlib.pyplot as plt
from tensorflow.python.framework import ops
ops.reset_default_graph()

# Start Graph Session
# Changed here: create the session with a config that allows soft
# device placement and logs where ops are placed
config = tf.ConfigProto(allow_soft_placement=True, log_device_placement=True)
sess = tf.Session(config=config)
#sess = tf.Session()

tf.set_random_seed(5)
np.random.seed(42)

batch_size = 50

a1 = tf.Variable(tf.random_normal(shape=[1,1]))
b1 = tf.Variable(tf.random_uniform(shape=[1,1]))
a2 = tf.Variable(tf.random_normal(shape=[1,1]))
b2 = tf.Variable(tf.random_uniform(shape=[1,1]))
x = np.random.normal(2, 0.1, 500)
x_data = tf.placeholder(shape=[None, 1], dtype=tf.float32)

sigmoid_activation = tf.sigmoid(tf.add(tf.matmul(x_data, a1), b1))
relu_activation = tf.nn.relu(tf.add(tf.matmul(x_data, a2), b2))

# Declare the loss function as the mean squared difference between
# the output and a target value, 0.75.
loss1 = tf.reduce_mean(tf.square(tf.subtract(sigmoid_activation, 0.75)))
loss2 = tf.reduce_mean(tf.square(tf.subtract(relu_activation, 0.75)))

# Initialize variables
init = tf.global_variables_initializer()
sess.run(init)

# Declare optimizer
my_opt = tf.train.GradientDescentOptimizer(0.01)
train_step_sigmoid = my_opt.minimize(loss1)
train_step_relu = my_opt.minimize(loss2)

# Run loop across gate
print('\nOptimizing Sigmoid AND Relu Output to 0.75')
loss_vec_sigmoid = []
loss_vec_relu = []
for i in range(500):
    rand_indices = np.random.choice(len(x), size=batch_size)
    x_vals = np.transpose([x[rand_indices]])
    sess.run(train_step_sigmoid, feed_dict={x_data: x_vals})
    sess.run(train_step_relu, feed_dict={x_data: x_vals})
    loss_vec_sigmoid.append(sess.run(loss1, feed_dict={x_data: x_vals}))
    loss_vec_relu.append(sess.run(loss2, feed_dict={x_data: x_vals}))
    sigmoid_output = np.mean(sess.run(sigmoid_activation, feed_dict={x_data: x_vals}))
    relu_output = np.mean(sess.run(relu_activation, feed_dict={x_data: x_vals}))
    if i % 50 == 0:
        print('sigmoid = ' + str(np.mean(sigmoid_output)) + ' relu = ' + str(np.mean(relu_output)))

# Plot the loss
plt.plot(loss_vec_sigmoid, 'k-', label='Sigmoid Activation')
plt.plot(loss_vec_relu, 'r--', label='Relu Activation')
plt.ylim([0, 1.0])
plt.title('Loss per Generation')
plt.xlabel('Generation')
plt.ylabel('Loss')
plt.legend(loc='upper right')
plt.show()
Optimizing Sigmoid AND Relu Output to 0.75
sigmoid = 0.126552 relu = 2.02276
sigmoid = 0.178638 relu = 0.75303
sigmoid = 0.247698 relu = 0.74929
sigmoid = 0.344675 relu = 0.749955
sigmoid = 0.440066 relu = 0.754
sigmoid = 0.52369 relu = 0.754772
sigmoid = 0.583739 relu = 0.75087
sigmoid = 0.627335 relu = 0.747023
sigmoid = 0.65495 relu = 0.751805
sigmoid = 0.674526 relu = 0.754707
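
Note how the ReLU output locks onto 0.75 within the first 50 generations while the sigmoid output is still climbing at the end. The reason is gradient magnitude: the sigmoid's derivative sigmoid(z)*(1 - sigmoid(z)) never exceeds 0.25 and vanishes as the unit saturates, whereas the ReLU passes a gradient of 1 for any positive input. A small standalone sketch (illustrative z values only) makes this concrete:

import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

z = np.linspace(-4, 4, 9)
sig_grad = sigmoid(z) * (1.0 - sigmoid(z))   # peaks at 0.25 when z = 0
relu_grad = (z > 0).astype(float)            # 1 for positive inputs, else 0
for zi, sg, rg in zip(z, sig_grad, relu_grad):
    print('z = %5.1f   sigmoid grad = %.4f   relu grad = %.0f' % (zi, sg, rg))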
