The previous post covered the principles behind ResNet and DenseNet; this one walks through a concrete implementation.
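Both snippets below lean on a handful of helpers (weight_variable, bias_variable, avg_pool) that the post never shows. Here is a minimal TF 1.x sketch of what they presumably look like; the initializer choices are an assumption, not necessarily what the original author used:

import tensorflow as tf

def weight_variable(shape):
    # Truncated-normal init is a common TF 1.x default; the original may differ
    return tf.Variable(tf.truncated_normal(shape, stddev=0.01))

def bias_variable(shape):
    return tf.Variable(tf.constant(0.01, shape=shape))

def avg_pool(input, s):
    # s x s average pooling with stride s, as used by the transition layers below
    return tf.nn.avg_pool(input, [1, s, s, 1], [1, s, s, 1], 'VALID')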


ResNet

def basic_block(input, in_features, out_features, stride, is_training, keep_prob):
    """Residual block: two 3x3 convs plus an identity (or downsampled) shortcut."""
    if stride == 1:
        shortcut = input
    else:
        # Downsample the shortcut and zero-pad its channels up to out_features
        shortcut = tf.nn.avg_pool(input, [1, stride, stride, 1],
                                  [1, stride, stride, 1], 'VALID')
        shortcut = tf.pad(shortcut, [[0, 0], [0, 0], [0, 0],
                                     [(out_features - in_features) // 2,
                                      (out_features - in_features) // 2]])
    # conv2d is defined in the DenseNet section below; note it needs a stride parameter here
    current = conv2d(input, in_features, out_features, 3, stride)
    current = tf.nn.dropout(current, keep_prob)
    current = tf.contrib.layers.batch_norm(current, scale=True, is_training=is_training,
                                           updates_collections=None)
    current = tf.nn.relu(current)
    current = conv2d(current, out_features, out_features, 3, 1)
    current = tf.nn.dropout(current, keep_prob)
    current = tf.contrib.layers.batch_norm(current, scale=True, is_training=is_training,
                                           updates_collections=None)
    return current + shortcut

def block_stack(input, in_features, out_features, stride, depth, is_training, keep_prob):
    """Stack `depth` residual blocks; only the first may change stride/width."""
    current = basic_block(input, in_features, out_features, stride, is_training, keep_prob)
    for _d in range(depth - 1):
        current = basic_block(current, out_features, out_features, 1, is_training, keep_prob)
    return current
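The post never shows how these pieces are assembled. For context, here is one way block_stack could build a small CIFAR-style ResNet (16/32/64 widths, as in the original ResNet paper); this resnet_model is a sketch of mine, not the author's code, and it reuses the xs, layers, label_count, is_training and keep_prob globals from the DenseNet model() below:

def resnet_model():
    """Hypothetical CIFAR-style ResNet assembled from block_stack (not in the original post)."""
    current = tf.reshape(xs, [-1, 32, 32, 3])
    current = conv2d(current, 3, 16, 3)                                        # stem: 3x3 conv to 16 channels
    current = block_stack(current, 16, 16, 1, layers, is_training, keep_prob)  # 32x32
    current = block_stack(current, 16, 32, 2, layers, is_training, keep_prob)  # 16x16
    current = block_stack(current, 32, 64, 2, layers, is_training, keep_prob)  # 8x8
    current = tf.nn.relu(current)
    current = avg_pool(current, 8)                                             # global average pooling
    current = tf.reshape(current, [-1, 64])
    Wfc = weight_variable([64, label_count])
    bfc = bias_variable([label_count])
    return tf.nn.softmax(tf.matmul(current, Wfc) + bfc)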

DenseNet

def conv2d(input, in_features, out_features, kernel_size, stride=1, with_bias=False):
    """Plain convolution; stride defaults to 1 (the ResNet block above passes it explicitly)."""
    W = weight_variable([kernel_size, kernel_size, in_features, out_features])
    conv = tf.nn.conv2d(input, W, [1, stride, stride, 1], padding='SAME')
    if with_bias:
        return conv + bias_variable([out_features])
    return conv

def batch_activ_conv(current, in_features, out_features, kernel_size, is_training, keep_prob):
    """BatchNorm + ReLU + conv + dropout (pre-activation ordering)."""
    current = tf.contrib.layers.batch_norm(current, scale=True, is_training=is_training,
                                           updates_collections=None)
    current = tf.nn.relu(current)
    current = conv2d(current, in_features, out_features, kernel_size)
    current = tf.nn.dropout(current, keep_prob)
    return current

def block(input, layers, in_features, growth, is_training, keep_prob):
    """Dense block: each layer's output is concatenated onto the running feature map."""
    current = input
    features = in_features
    for idx in range(layers):
        tmp = batch_activ_conv(current, features, growth, 3, is_training, keep_prob)
        current = tf.concat([current, tmp], axis=3)  # concat along the channel axis
        features += growth
    return current, features

def model():
    """DenseNet for CIFAR-style 32x32 inputs."""
    # xs, layers, label_count, is_training and keep_prob are placeholders/globals defined elsewhere
    current = tf.reshape(xs, [-1, 32, 32, 3])  # input
    current = conv2d(current, 3, 16, 3)
    current, features = block(current, layers, 16, 12, is_training, keep_prob)
    current = batch_activ_conv(current, features, features, 1, is_training, keep_prob)
    current = avg_pool(current, 2)  # transition: 1x1 conv + 2x2 average pooling
    current, features = block(current, layers, features, 12, is_training, keep_prob)
    current = batch_activ_conv(current, features, features, 1, is_training, keep_prob)
    current = avg_pool(current, 2)
    current, features = block(current, layers, features, 12, is_training, keep_prob)
    current = tf.contrib.layers.batch_norm(current, scale=True, is_training=is_training,
                                           updates_collections=None)
    current = tf.nn.relu(current)
    current = avg_pool(current, 8)  # global average pooling
    final_dim = features
    current = tf.reshape(current, [-1, final_dim])
    Wfc = weight_variable([final_dim, label_count])
    bfc = bias_variable([label_count])
    ys_ = tf.nn.softmax(tf.matmul(current, Wfc) + bfc)
    return ys_
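Note how the channel count grows: each of the `layers` iterations inside block() concatenates `growth` new channels, so a block that enters with in_features channels leaves with in_features + layers * growth. With the values above (initial width 16, growth 12) and, say, layers = 12 per block, the three dense blocks end at 160, 304 and 448 channels respectively, and final_dim = 448 feeds the classifier.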

The code above is not complete; it only captures the naive core of each idea.
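In particular, the post stops at the forward graph. One way to wire it up for training is sketched below; this is a minimal TF 1.x scaffold of mine, not part of the original post, and the placeholder names and hyperparameters are assumptions:

# Hypothetical globals used by model() above
label_count = 10    # e.g. CIFAR-10
layers = 12         # layers per dense block (arbitrary choice)

xs = tf.placeholder(tf.float32, [None, 32 * 32 * 3])
ys = tf.placeholder(tf.float32, [None, label_count])
is_training = tf.placeholder(tf.bool)
keep_prob = tf.placeholder(tf.float32)

ys_ = model()  # forward pass from the snippet above
# cross-entropy written out by hand, with a small epsilon for numerical safety
cross_entropy = -tf.reduce_mean(tf.reduce_sum(ys * tf.log(ys_ + 1e-12), axis=1))
train_step = tf.train.MomentumOptimizer(0.1, 0.9, use_nesterov=True).minimize(cross_entropy)

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    # per batch: sess.run(train_step, feed_dict={xs: ..., ys: ..., is_training: True, keep_prob: 0.8})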
