1. Introduction to VGG-16

VGG was proposed in the paper "Very Deep Convolutional Networks for Large-Scale Image Recognition". The model reaches 92.7% top-5 test accuracy on ImageNet, a dataset of about 14 million images in 1,000 categories.
VGG-16 is a deep convolutional neural network model; the 16 refers to its depth (16 weight layers), and it performs well on tasks such as image classification.
The macro structure of VGG-16 is shown in the figure below; the model is defined in the TensorFlow file vgg16.py. Note that it includes a preprocessing layer that takes RGB images with pixel values in the 0-255 range and subtracts the per-channel mean (computed over the entire ImageNet training set).
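As a quick illustration of that preprocessing step, the following minimal sketch subtracts the per-channel ImageNet means (the same constants 123.68, 116.779, 103.939 that appear in vgg16.py) from a batch of RGB images. The batch shape and the dummy data are assumptions for the example only.

import numpy as np

# Per-channel ImageNet means (R, G, B), matching the constants in vgg16.py
IMAGENET_MEAN = np.array([123.68, 116.779, 103.939], dtype=np.float32)

def preprocess(batch):
    """Subtract the ImageNet mean from a batch of RGB images.

    batch: array of shape [N, 224, 224, 3] with pixel values in 0-255.
    """
    return batch.astype(np.float32) - IMAGENET_MEAN.reshape(1, 1, 1, 3)

# Illustrative dummy batch of two images
dummy = np.random.randint(0, 256, size=(2, 224, 224, 3)).astype(np.float32)
print(preprocess(dummy).shape)  # (2, 224, 224, 3)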


2. File composition

Model weights - vgg16_weights.npz
TensorFlow model - vgg16.py
Class names (mapping from model output indices to class names) - imagenet_classes.py
Example input image - laska.png
We converted the Caffe weights that the original author made publicly available on his GitHub profile with a dedicated tool and did some post-processing to make sure the model conforms to TensorFlow conventions; the result is the usable weight file vgg16_weights.npz.
Download all of the files into the same folder and run: python vgg16.py
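Before running the full model, you can sanity-check the downloaded weight file by listing the arrays it contains; the key names and shapes should match the listing that load_weights prints later (conv1_1_W (3, 3, 3, 64), and so on). A minimal sketch, assuming vgg16_weights.npz sits in the current directory:

import numpy as np

# Open the .npz archive and print every stored array with its shape
weights = np.load('vgg16_weights.npz')
for key in sorted(weights.keys()):
    print(key, weights[key].shape)   # e.g. conv1_1_W (3, 3, 3, 64)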

- Code of the vgg16.py file:

import tensorflow as tf
import numpy as np
from scipy.misc import imread, imresize
from imagenet_classes import class_names


class vgg16:
    def __init__(self, imgs, weights=None, sess=None):
        # Build the graph and, if a weight file and session are given,
        # load the pretrained parameters immediately.
        self.imgs = imgs
        self.convlayers()
        self.fc_layers()
        self.probs = tf.nn.softmax(self.fc3l)
        if weights is not None and sess is not None:
            self.load_weights(weights, sess)

    def convlayers(self):
        self.parameters = []

        # zero-mean input
        with tf.name_scope('preprocess') as scope:
            mean = tf.constant([123.68, 116.779, 103.939], dtype=tf.float32,
                               shape=[1, 1, 1, 3], name='img_mean')
            images = self.imgs - mean

        # conv1_1
        with tf.name_scope('conv1_1') as scope:
            kernel = tf.Variable(tf.truncated_normal([3, 3, 3, 64], dtype=tf.float32,
                                                     stddev=1e-1), name='weights')
            conv = tf.nn.conv2d(images, kernel, [1, 1, 1, 1], padding='SAME')
            biases = tf.Variable(tf.constant(0.0, shape=[64], dtype=tf.float32),
                                 trainable=True, name='biases')
            out = tf.nn.bias_add(conv, biases)
            self.conv1_1 = tf.nn.relu(out, name=scope)
            self.parameters += [kernel, biases]

        # conv1_2
        with tf.name_scope('conv1_2') as scope:
            kernel = tf.Variable(tf.truncated_normal([3, 3, 64, 64], dtype=tf.float32,
                                                     stddev=1e-1), name='weights')
            conv = tf.nn.conv2d(self.conv1_1, kernel, [1, 1, 1, 1], padding='SAME')
            biases = tf.Variable(tf.constant(0.0, shape=[64], dtype=tf.float32),
                                 trainable=True, name='biases')
            out = tf.nn.bias_add(conv, biases)
            self.conv1_2 = tf.nn.relu(out, name=scope)
            self.parameters += [kernel, biases]

        # pool1
        self.pool1 = tf.nn.max_pool(self.conv1_2,
                                    ksize=[1, 2, 2, 1],
                                    strides=[1, 2, 2, 1],
                                    padding='SAME',
                                    name='pool1')

        # conv2_1
        with tf.name_scope('conv2_1') as scope:
            kernel = tf.Variable(tf.truncated_normal([3, 3, 64, 128], dtype=tf.float32,
                                                     stddev=1e-1), name='weights')
            conv = tf.nn.conv2d(self.pool1, kernel, [1, 1, 1, 1], padding='SAME')
            biases = tf.Variable(tf.constant(0.0, shape=[128], dtype=tf.float32),
                                 trainable=True, name='biases')
            out = tf.nn.bias_add(conv, biases)
            self.conv2_1 = tf.nn.relu(out, name=scope)
            self.parameters += [kernel, biases]

        # conv2_2
        with tf.name_scope('conv2_2') as scope:
            kernel = tf.Variable(tf.truncated_normal([3, 3, 128, 128], dtype=tf.float32,
                                                     stddev=1e-1), name='weights')
            conv = tf.nn.conv2d(self.conv2_1, kernel, [1, 1, 1, 1], padding='SAME')
            biases = tf.Variable(tf.constant(0.0, shape=[128], dtype=tf.float32),
                                 trainable=True, name='biases')
            out = tf.nn.bias_add(conv, biases)
            self.conv2_2 = tf.nn.relu(out, name=scope)
            self.parameters += [kernel, biases]

        # pool2
        self.pool2 = tf.nn.max_pool(self.conv2_2,
                                    ksize=[1, 2, 2, 1],
                                    strides=[1, 2, 2, 1],
                                    padding='SAME',
                                    name='pool2')

        # conv3_1
        with tf.name_scope('conv3_1') as scope:
            kernel = tf.Variable(tf.truncated_normal([3, 3, 128, 256], dtype=tf.float32,
                                                     stddev=1e-1), name='weights')
            conv = tf.nn.conv2d(self.pool2, kernel, [1, 1, 1, 1], padding='SAME')
            biases = tf.Variable(tf.constant(0.0, shape=[256], dtype=tf.float32),
                                 trainable=True, name='biases')
            out = tf.nn.bias_add(conv, biases)
            self.conv3_1 = tf.nn.relu(out, name=scope)
            self.parameters += [kernel, biases]

        # conv3_2
        with tf.name_scope('conv3_2') as scope:
            kernel = tf.Variable(tf.truncated_normal([3, 3, 256, 256], dtype=tf.float32,
                                                     stddev=1e-1), name='weights')
            conv = tf.nn.conv2d(self.conv3_1, kernel, [1, 1, 1, 1], padding='SAME')
            biases = tf.Variable(tf.constant(0.0, shape=[256], dtype=tf.float32),
                                 trainable=True, name='biases')
            out = tf.nn.bias_add(conv, biases)
            self.conv3_2 = tf.nn.relu(out, name=scope)
            self.parameters += [kernel, biases]

        # conv3_3
        with tf.name_scope('conv3_3') as scope:
            kernel = tf.Variable(tf.truncated_normal([3, 3, 256, 256], dtype=tf.float32,
                                                     stddev=1e-1), name='weights')
            conv = tf.nn.conv2d(self.conv3_2, kernel, [1, 1, 1, 1], padding='SAME')
            biases = tf.Variable(tf.constant(0.0, shape=[256], dtype=tf.float32),
                                 trainable=True, name='biases')
            out = tf.nn.bias_add(conv, biases)
            self.conv3_3 = tf.nn.relu(out, name=scope)
            self.parameters += [kernel, biases]

        # pool3
        self.pool3 = tf.nn.max_pool(self.conv3_3,
                                    ksize=[1, 2, 2, 1],
                                    strides=[1, 2, 2, 1],
                                    padding='SAME',
                                    name='pool3')

        # conv4_1
        with tf.name_scope('conv4_1') as scope:
            kernel = tf.Variable(tf.truncated_normal([3, 3, 256, 512], dtype=tf.float32,
                                                     stddev=1e-1), name='weights')
            conv = tf.nn.conv2d(self.pool3, kernel, [1, 1, 1, 1], padding='SAME')
            biases = tf.Variable(tf.constant(0.0, shape=[512], dtype=tf.float32),
                                 trainable=True, name='biases')
            out = tf.nn.bias_add(conv, biases)
            self.conv4_1 = tf.nn.relu(out, name=scope)
            self.parameters += [kernel, biases]

        # conv4_2
        with tf.name_scope('conv4_2') as scope:
            kernel = tf.Variable(tf.truncated_normal([3, 3, 512, 512], dtype=tf.float32,
                                                     stddev=1e-1), name='weights')
            conv = tf.nn.conv2d(self.conv4_1, kernel, [1, 1, 1, 1], padding='SAME')
            biases = tf.Variable(tf.constant(0.0, shape=[512], dtype=tf.float32),
                                 trainable=True, name='biases')
            out = tf.nn.bias_add(conv, biases)
            self.conv4_2 = tf.nn.relu(out, name=scope)
            self.parameters += [kernel, biases]

        # conv4_3
        with tf.name_scope('conv4_3') as scope:
            kernel = tf.Variable(tf.truncated_normal([3, 3, 512, 512], dtype=tf.float32,
                                                     stddev=1e-1), name='weights')
            conv = tf.nn.conv2d(self.conv4_2, kernel, [1, 1, 1, 1], padding='SAME')
            biases = tf.Variable(tf.constant(0.0, shape=[512], dtype=tf.float32),
                                 trainable=True, name='biases')
            out = tf.nn.bias_add(conv, biases)
            self.conv4_3 = tf.nn.relu(out, name=scope)
            self.parameters += [kernel, biases]

        # pool4
        self.pool4 = tf.nn.max_pool(self.conv4_3,
                                    ksize=[1, 2, 2, 1],
                                    strides=[1, 2, 2, 1],
                                    padding='SAME',
                                    name='pool4')

        # conv5_1
        with tf.name_scope('conv5_1') as scope:
            kernel = tf.Variable(tf.truncated_normal([3, 3, 512, 512], dtype=tf.float32,
                                                     stddev=1e-1), name='weights')
            conv = tf.nn.conv2d(self.pool4, kernel, [1, 1, 1, 1], padding='SAME')
            biases = tf.Variable(tf.constant(0.0, shape=[512], dtype=tf.float32),
                                 trainable=True, name='biases')
            out = tf.nn.bias_add(conv, biases)
            self.conv5_1 = tf.nn.relu(out, name=scope)
            self.parameters += [kernel, biases]

        # conv5_2
        with tf.name_scope('conv5_2') as scope:
            kernel = tf.Variable(tf.truncated_normal([3, 3, 512, 512], dtype=tf.float32,
                                                     stddev=1e-1), name='weights')
            conv = tf.nn.conv2d(self.conv5_1, kernel, [1, 1, 1, 1], padding='SAME')
            biases = tf.Variable(tf.constant(0.0, shape=[512], dtype=tf.float32),
                                 trainable=True, name='biases')
            out = tf.nn.bias_add(conv, biases)
            self.conv5_2 = tf.nn.relu(out, name=scope)
            self.parameters += [kernel, biases]

        # conv5_3
        with tf.name_scope('conv5_3') as scope:
            kernel = tf.Variable(tf.truncated_normal([3, 3, 512, 512], dtype=tf.float32,
                                                     stddev=1e-1), name='weights')
            conv = tf.nn.conv2d(self.conv5_2, kernel, [1, 1, 1, 1], padding='SAME')
            biases = tf.Variable(tf.constant(0.0, shape=[512], dtype=tf.float32),
                                 trainable=True, name='biases')
            out = tf.nn.bias_add(conv, biases)
            self.conv5_3 = tf.nn.relu(out, name=scope)
            self.parameters += [kernel, biases]

        # pool5
        self.pool5 = tf.nn.max_pool(self.conv5_3,
                                    ksize=[1, 2, 2, 1],
                                    strides=[1, 2, 2, 1],
                                    padding='SAME',
                                    name='pool5')

    def fc_layers(self):
        # fc1
        with tf.name_scope('fc1') as scope:
            shape = int(np.prod(self.pool5.get_shape()[1:]))
            fc1w = tf.Variable(tf.truncated_normal([shape, 4096],
                                                   dtype=tf.float32,
                                                   stddev=1e-1), name='weights')
            fc1b = tf.Variable(tf.constant(1.0, shape=[4096], dtype=tf.float32),
                               trainable=True, name='biases')
            pool5_flat = tf.reshape(self.pool5, [-1, shape])
            fc1l = tf.nn.bias_add(tf.matmul(pool5_flat, fc1w), fc1b)
            self.fc1 = tf.nn.relu(fc1l)
            self.parameters += [fc1w, fc1b]

        # fc2
        with tf.name_scope('fc2') as scope:
            fc2w = tf.Variable(tf.truncated_normal([4096, 4096],
                                                   dtype=tf.float32,
                                                   stddev=1e-1), name='weights')
            fc2b = tf.Variable(tf.constant(1.0, shape=[4096], dtype=tf.float32),
                               trainable=True, name='biases')
            fc2l = tf.nn.bias_add(tf.matmul(self.fc1, fc2w), fc2b)
            self.fc2 = tf.nn.relu(fc2l)
            self.parameters += [fc2w, fc2b]

        # fc3
        with tf.name_scope('fc3') as scope:
            fc3w = tf.Variable(tf.truncated_normal([4096, 1000],
                                                   dtype=tf.float32,
                                                   stddev=1e-1), name='weights')
            fc3b = tf.Variable(tf.constant(1.0, shape=[1000], dtype=tf.float32),
                               trainable=True, name='biases')
            self.fc3l = tf.nn.bias_add(tf.matmul(self.fc2, fc3w), fc3b)
            self.parameters += [fc3w, fc3b]

    def load_weights(self, weight_file, sess):
        # Assign the pretrained arrays to the variables in creation order.
        weights = np.load(weight_file)
        keys = sorted(weights.keys())
        for i, k in enumerate(keys):
            print(i, k, np.shape(weights[k]))
            sess.run(self.parameters[i].assign(weights[k]))


if __name__ == '__main__':
    # Build the network and load the pretrained weights
    sess = tf.Session()
    imgs = tf.placeholder(tf.float32, [None, 224, 224, 3])
    vgg = vgg16(imgs, 'vgg16_weights.npz', sess)

    # Load and resize the example image
    img1 = imread('laska.png', mode='RGB')
    img1 = imresize(img1, (224, 224))

    # Run the forward pass and report the top-5 classes
    prob = sess.run(vgg.probs, feed_dict={vgg.imgs: [img1]})[0]
    preds = (np.argsort(prob)[::-1])[0:5]
    for p in preds:
        # print class_names[p], prob[p]
        print("class_name {}: step {}".format(class_names[p], prob[p]))
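Note that scipy.misc.imread and imresize were deprecated and removed in newer SciPy releases (1.3 and later). If they are unavailable in your environment, a minimal sketch of an equivalent loader, assuming the Pillow package is installed, is:

import numpy as np
from PIL import Image

def load_image(path, size=(224, 224)):
    """Read an image as RGB, resize it, and return a numpy array (H, W, 3)."""
    img = Image.open(path).convert('RGB').resize(size)
    return np.array(img)

# Drop-in replacement for the imread/imresize lines in __main__:
# img1 = load_image('laska.png')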

Run and test

Test 1:

Input image: laska.png

Output:

2018-03-23 11:04:38.311802: W tensorflow/core/platform/cpu_feature_guard.cc:45] The TensorFlow library wasn't compiled to use AVX2 instructions, but these are available on your machine and could speed up CPU computations.
2018-03-23 11:04:38.311873: W tensorflow/core/platform/cpu_feature_guard.cc:45] The TensorFlow library wasn't compiled to use FMA instructions, but these are available on your machine and could speed up CPU computations.
0 conv1_1_W (3, 3, 3, 64)
1 conv1_1_b (64,)
2 conv1_2_W (3, 3, 64, 64)
3 conv1_2_b (64,)
4 conv2_1_W (3, 3, 64, 128)
5 conv2_1_b (128,)
6 conv2_2_W (3, 3, 128, 128)
7 conv2_2_b (128,)
8 conv3_1_W (3, 3, 128, 256)
9 conv3_1_b (256,)
10 conv3_2_W (3, 3, 256, 256)
11 conv3_2_b (256,)
12 conv3_3_W (3, 3, 256, 256)
13 conv3_3_b (256,)
14 conv4_1_W (3, 3, 256, 512)
15 conv4_1_b (512,)
16 conv4_2_W (3, 3, 512, 512)
17 conv4_2_b (512,)
18 conv4_3_W (3, 3, 512, 512)
19 conv4_3_b (512,)
20 conv5_1_W (3, 3, 512, 512)
21 conv5_1_b (512,)
22 conv5_2_W (3, 3, 512, 512)
23 conv5_2_b (512,)
24 conv5_3_W (3, 3, 512, 512)
25 conv5_3_b (512,)
26 fc6_W (25088, 4096)
27 fc6_b (4096,)
28 fc7_W (4096, 4096)
29 fc7_b (4096,)
30 fc8_W (4096, 1000)
31 fc8_b (1000,)
class_name weasel: step 0.693385839462
class_name polecat, fitch, foulmart, foumart, Mustela putorius: step 0.175387635827
class_name mink: step 0.12208583951
class_name black-footed ferret, ferret, Mustela nigripes: step 0.00887066219002
class_name otter: step 0.000121083263366

The classification result is weasel.

Test 2:
Input image: a multi-scene photo

Output:

2018-03-23 11:15:22.718228: W tensorflow/core/platform/cpu_feature_guard.cc:45] The TensorFlow library wasn't compiled to use AVX2 instructions, but these are available on your machine and could speed up CPU computations.
2018-03-23 11:15:22.718297: W tensorflow/core/platform/cpu_feature_guard.cc:45] The TensorFlow library wasn't compiled to use FMA instructions, but these are available on your machine and could speed up CPU computations.
0 conv1_1_W (3, 3, 3, 64)
1 conv1_1_b (64,)
2 conv1_2_W (3, 3, 64, 64)
3 conv1_2_b (64,)
4 conv2_1_W (3, 3, 64, 128)
5 conv2_1_b (128,)
6 conv2_2_W (3, 3, 128, 128)
7 conv2_2_b (128,)
8 conv3_1_W (3, 3, 128, 256)
9 conv3_1_b (256,)
10 conv3_2_W (3, 3, 256, 256)
11 conv3_2_b (256,)
12 conv3_3_W (3, 3, 256, 256)
13 conv3_3_b (256,)
14 conv4_1_W (3, 3, 256, 512)
15 conv4_1_b (512,)
16 conv4_2_W (3, 3, 512, 512)
17 conv4_2_b (512,)
18 conv4_3_W (3, 3, 512, 512)
19 conv4_3_b (512,)
20 conv5_1_W (3, 3, 512, 512)
21 conv5_1_b (512,)
22 conv5_2_W (3, 3, 512, 512)
23 conv5_2_b (512,)
24 conv5_3_W (3, 3, 512, 512)
25 conv5_3_b (512,)
26 fc6_W (25088, 4096)
27 fc6_b (4096,)
28 fc7_W (4096, 4096)
29 fc7_b (4096,)
30 fc8_W (4096, 1000)
31 fc8_b (1000,)
class_name alp: step 0.830908000469
class_name church, church building: step 0.0817768126726
class_name castle: step 0.024959910661
class_name valley, vale: step 0.0158758834004
class_name monastery: step 0.0100631769747

The classification picked out the alp, the church, the castle, the valley, and the monastery. The result is quite good: although none of the individual confidences is high, all of the relevant categories are covered.
Test 3:
Input image:

Output:

2018-03-23 11:34:50.490069: W tensorflow/core/platform/cpu_feature_guard.cc:45] The TensorFlow library wasn't compiled to use AVX2 instructions, but these are available on your machine and could speed up CPU computations.
2018-03-23 11:34:50.490137: W tensorflow/core/platform/cpu_feature_guard.cc:45] The TensorFlow library wasn't compiled to use FMA instructions, but these are available on your machine and could speed up CPU computations.
0 conv1_1_W (3, 3, 3, 64)
1 conv1_1_b (64,)
2 conv1_2_W (3, 3, 64, 64)
3 conv1_2_b (64,)
4 conv2_1_W (3, 3, 64, 128)
5 conv2_1_b (128,)
6 conv2_2_W (3, 3, 128, 128)
7 conv2_2_b (128,)
8 conv3_1_W (3, 3, 128, 256)
9 conv3_1_b (256,)
10 conv3_2_W (3, 3, 256, 256)
11 conv3_2_b (256,)
12 conv3_3_W (3, 3, 256, 256)
13 conv3_3_b (256,)
14 conv4_1_W (3, 3, 256, 512)
15 conv4_1_b (512,)
16 conv4_2_W (3, 3, 512, 512)
17 conv4_2_b (512,)
18 conv4_3_W (3, 3, 512, 512)
19 conv4_3_b (512,)
20 conv5_1_W (3, 3, 512, 512)
21 conv5_1_b (512,)
22 conv5_2_W (3, 3, 512, 512)
23 conv5_2_b (512,)
24 conv5_3_W (3, 3, 512, 512)
25 conv5_3_b (512,)
26 fc6_W (25088, 4096)
27 fc6_b (4096,)
28 fc7_W (4096, 4096)
29 fc7_b (4096,)
30 fc8_W (4096, 1000)
31 fc8_b (1000,)
class_name cup: step 0.543631911278
class_name coffee mug: step 0.364796578884
class_name pitcher, ewer: step 0.0259610358626
class_name eggnog: step 0.0117611540481
class_name water jug: step 0.00806392729282

The classification result is cup.
Test 4:
Input image:

Output:

2018-03-23 11:37:23.573090: W tensorflow/core/platform/cpu_feature_guard.cc:45] The TensorFlow library wasn't compiled to use AVX2 instructions, but these are available on your machine and could speed up CPU computations.
2018-03-23 11:37:23.573159: W tensorflow/core/platform/cpu_feature_guard.cc:45] The TensorFlow library wasn't compiled to use FMA instructions, but these are available on your machine and could speed up CPU computations.
0 conv1_1_W (3, 3, 3, 64)
1 conv1_1_b (64,)
2 conv1_2_W (3, 3, 64, 64)
3 conv1_2_b (64,)
4 conv2_1_W (3, 3, 64, 128)
5 conv2_1_b (128,)
6 conv2_2_W (3, 3, 128, 128)
7 conv2_2_b (128,)
8 conv3_1_W (3, 3, 128, 256)
9 conv3_1_b (256,)
10 conv3_2_W (3, 3, 256, 256)
11 conv3_2_b (256,)
12 conv3_3_W (3, 3, 256, 256)
13 conv3_3_b (256,)
14 conv4_1_W (3, 3, 256, 512)
15 conv4_1_b (512,)
16 conv4_2_W (3, 3, 512, 512)
17 conv4_2_b (512,)
18 conv4_3_W (3, 3, 512, 512)
19 conv4_3_b (512,)
20 conv5_1_W (3, 3, 512, 512)
21 conv5_1_b (512,)
22 conv5_2_W (3, 3, 512, 512)
23 conv5_2_b (512,)
24 conv5_3_W (3, 3, 512, 512)
25 conv5_3_b (512,)
26 fc6_W (25088, 4096)
27 fc6_b (4096,)
28 fc7_W (4096, 4096)
29 fc7_b (4096,)
30 fc8_W (4096, 1000)
31 fc8_b (1000,)
class_name cellular telephone, cellular phone, cellphone, cell, mobile phone: step 0.465327292681
class_name iPod: step 0.10543012619
class_name radio, wireless: step 0.0810257941484
class_name hard disc, hard disk, fixed disk: step 0.0789099931717
class_name modem: step 0.0603163056076

The classification result is cellular telephone.
Test 5:
Input image:

Output:

2018-03-23 11:40:40.956946: W tensorflow/core/platform/cpu_feature_guard.cc:45] The TensorFlow library wasn't compiled to use AVX2 instructions, but these are available on your machine and could speed up CPU computations.
2018-03-23 11:40:40.957016: W tensorflow/core/platform/cpu_feature_guard.cc:45] The TensorFlow library wasn't compiled to use FMA instructions, but these are available on your machine and could speed up CPU computations.
0 conv1_1_W (3, 3, 3, 64)
1 conv1_1_b (64,)
2 conv1_2_W (3, 3, 64, 64)
3 conv1_2_b (64,)
4 conv2_1_W (3, 3, 64, 128)
5 conv2_1_b (128,)
6 conv2_2_W (3, 3, 128, 128)
7 conv2_2_b (128,)
8 conv3_1_W (3, 3, 128, 256)
9 conv3_1_b (256,)
10 conv3_2_W (3, 3, 256, 256)
11 conv3_2_b (256,)
12 conv3_3_W (3, 3, 256, 256)
13 conv3_3_b (256,)
14 conv4_1_W (3, 3, 256, 512)
15 conv4_1_b (512,)
16 conv4_2_W (3, 3, 512, 512)
17 conv4_2_b (512,)
18 conv4_3_W (3, 3, 512, 512)
19 conv4_3_b (512,)
20 conv5_1_W (3, 3, 512, 512)
21 conv5_1_b (512,)
22 conv5_2_W (3, 3, 512, 512)
23 conv5_2_b (512,)
24 conv5_3_W (3, 3, 512, 512)
25 conv5_3_b (512,)
26 fc6_W (25088, 4096)
27 fc6_b (4096,)
28 fc7_W (4096, 4096)
29 fc7_b (4096,)
30 fc8_W (4096, 1000)
31 fc8_b (1000,)
class_name water bottle: step 0.75726544857
class_name pop bottle, soda bottle: step 0.0976340323687
class_name nipple: step 0.0622750669718
class_name water jug: step 0.0233819428831
class_name soap dispenser: step 0.017936654388

The classification result is water bottle.
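All five tests feed one image at a time, but since the input placeholder is declared as [None, 224, 224, 3], several images can be scored in a single forward pass. A minimal sketch, assuming the code above is saved as vgg16.py and using placeholder file names for the test images:

import numpy as np
import tensorflow as tf
from scipy.misc import imread, imresize      # or the Pillow loader sketched earlier
from imagenet_classes import class_names
from vgg16 import vgg16                      # the class defined in vgg16.py above

sess = tf.Session()
imgs = tf.placeholder(tf.float32, [None, 224, 224, 3])
vgg = vgg16(imgs, 'vgg16_weights.npz', sess)

paths = ['laska.png', 'test2.png']           # placeholder file names
batch = [imresize(imread(p, mode='RGB'), (224, 224)) for p in paths]

# One forward pass for the whole batch: probs has shape (len(paths), 1000)
probs = sess.run(vgg.probs, feed_dict={vgg.imgs: batch})
for path, prob in zip(paths, probs):
    top5 = np.argsort(prob)[::-1][:5]
    print(path)
    for i in top5:
        print("  class_name {}: step {}".format(class_names[i], prob[i]))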
References

https://blog.csdn.net/daydayup_668819/article/details/79651137
