Table of Contents

  • 1. Introduction
  • 2. Search Space
    • 2.1 Search Space Definition
    • 2.2 Common Search Space Examples
      • 2.2.1 Chain-Structured Search Space
      • 2.2.2 Complex Multi-Branch Search Spaces
      • 2.2.3 Cell-Based Search Space
  • 3. Search Strategy
    • 3.1 Reinforcement Learning
    • 3.2 Evolutionary Algorithms
    • 3.3 Bayesian Optimization
    • 3.4 Gradient-Based Methods
    • 3.5 Other Methods
  • 4. Performance Estimation Strategy
    • 4.1 Naive Performance Estimation
    • 4.2 Lower-Fidelity Estimates
    • 4.3 Learning Curve Extrapolation
    • 4.4 Weight Inheritance and Network Morphisms
    • 4.5 One-Shot Models and Weight Sharing
  • 5. Future Directions

Reference: Elsken T., Metzen J. H., Hutter F. Neural Architecture Search: A Survey. Journal of Machine Learning Research, 2019, 20(55): 1–21. https://arxiv.org/abs/1808.05377v2 , http://jmlr.org/papers/volume20/18-598/18-598.pdf

1. Introduction

Figure 1: Abstract illustration of NAS methods. The search strategy selects an architecture A from a predefined search space $\mathcal{A}$; the architecture is handed to performance estimation, and the estimated performance is fed back to the search strategy.

2. Search Space

2.1 Search Space Definition

The search space defines which neural architectures a NAS approach might discover in principle.

2.2 Common Search Space Examples

2.2.1 Chain-Structured Search Space

Abstract definition (left part of Figure 2): a chain-structured neural network architecture A can be written as a sequence of n layers, where the i-th layer $L_i$ receives its input from layer $i-1$ and its output serves as the input for layer $i+1$, i.e., $A = L_n \circ \cdots \circ L_1 \circ L_0$.
Concrete definition: a chain-structured search space is parametrized by (1) the (maximum) number of layers n; (2) the type of operation each layer executes, e.g., pooling, convolution, or more advanced operations such as depthwise separable convolutions (Chollet, 2016) or dilated convolutions (Yu and Koltun, 2016); and (3) the hyperparameters associated with each operation, e.g., the number of filters, kernel size, or number of units for fully-connected layers. The choices in (3) are conditioned on the choices in (2) (and those in (2) on (1)), so the chain-structured search space has variable length and is a conditional space.
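To make the conditional structure concrete, here is a minimal Python sketch that samples one architecture from such a chain-structured space; the operation set, hyperparameter ranges, and function names are illustrative assumptions and not taken from any particular NAS paper.

```python
import random

# Illustrative (assumed) operation set; hyperparameter choices are conditional on the op.
OPS = {
    "conv":     {"kernel_size": [1, 3, 5], "num_filters": [16, 32, 64]},
    "sep_conv": {"kernel_size": [3, 5],    "num_filters": [16, 32, 64]},
    "max_pool": {"kernel_size": [2, 3]},   # pooling has no filter count
}

def sample_chain_architecture(max_layers=10):
    """Sample (1) a depth, then per layer (2) an operation and (3) its hyperparameters."""
    num_layers = random.randint(1, max_layers)                       # choice (1)
    layers = []
    for _ in range(num_layers):
        op = random.choice(list(OPS))                                # choice (2)
        hparams = {k: random.choice(v) for k, v in OPS[op].items()}  # choice (3), conditional on op
        layers.append((op, hparams))
    return layers

print(sample_chain_architecture())
```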

2.2.2 Complex Multi-Branch Search Spaces

Abstract definition (right part of Figure 2): recent work on NAS incorporates modern design elements known from hand-crafted architectures, such as skip connections, which allow building complex, multi-branch networks.
Work incorporating such modern design elements includes:
Andrew Brock, Theodore Lim, James M. Ritchie, and Nick Weston. SMASH: one-shot model architecture search through hypernetworks. In NIPS Workshop on Meta-Learning, 2017.
Thomas Elsken, Jan Hendrik Metzen, and Frank Hutter. Simple And Efficient Architecture Search for Convolutional Neural Networks. In NIPS Workshop on Meta-Learning, 2017.
Barret Zoph, Vijay Vasudevan, Jonathon Shlens, and Quoc V. Le. Learning transferable architectures for scalable image recognition. In Conference on Computer Vision and Pattern Recognition, 2018.
Thomas Elsken, Jan Hendrik Metzen, and Frank Hutter. Efficient multi-objective neural architecture search via lamarckian evolution. In International Conference on Learning Representations, 2019.
Esteban Real, Alok Aggarwal, Yanping Huang, and Quoc V. Le. Aging Evolution for Image Classifier Architecture Search. In AAAI, 2019.
Han Cai, Jiacheng Yang, Weinan Zhang, Song Han, and Yong Yu. Path-Level Network Transformation for Efficient Architecture Search. In International Conference on Machine Learning, June 2018b.
Concrete definition: in this case the input of layer i can be formally described as a function $g_i(L_{i-1}^{out}, \ldots, L_0^{out})$ combining previous layer outputs.
Special cases (a small NumPy sketch of these combination functions follows this list):
(i) chain structure: $g_i(L_{i-1}^{out}, \ldots, L_0^{out}) = L_{i-1}^{out}$
(ii) Residual Networks: $g_i(L_{i-1}^{out}, \ldots, L_0^{out}) = L_{i-1}^{out} + L_j^{out},\; j < i-1$
(iii) DenseNets: $g_i(L_{i-1}^{out}, \ldots, L_0^{out}) = \mathrm{concat}(L_{i-1}^{out}, \ldots, L_0^{out})$
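For illustration only, the three combination functions can be written as plain NumPy code; this sketch assumes all previous layer outputs already have compatible shapes.

```python
import numpy as np

def g_chain(prev_outputs):
    # (i) chain structure: use only the immediately preceding layer's output
    return prev_outputs[-1]

def g_residual(prev_outputs, j=0):
    # (ii) residual connection: add the output of an earlier layer j (j < i-1)
    return prev_outputs[-1] + prev_outputs[j]

def g_dense(prev_outputs):
    # (iii) DenseNet-style: concatenate all previous outputs along the feature axis
    return np.concatenate(prev_outputs, axis=-1)

outs = [np.random.randn(8, 16) for _ in range(4)]   # fake outputs of layers L0..L3
print(g_chain(outs).shape, g_residual(outs).shape, g_dense(outs).shape)
```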

2.2.3 Cell-Based Search Space

Motivated by the observation in the following work that repeated motifs can build high-performing networks:
InceptionNet, ResNet, DenseNet
BlockQNN: Zhao Zhong, Junjie Yan, Wei Wu, Jing Shao, and Cheng-Lin Liu. Practical block-wise neural network architecture generation. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 2423–2432, 2018a.


Three main advantages of cell-based search spaces:

  1. Reduced dimensionality: the size of the search space is drastically reduced.
  2. Strong transferability: architectures built from cells can more easily be transferred or adapted to other data sets by simply varying the number of cells and filters used within a model.
  3. Applicable to both CNNs and RNNs: creating architectures by repeating building blocks has proven a useful design principle in general.

Given these advantages, representative work using cell-based search spaces includes:
Esteban Real, Alok Aggarwal, Yanping Huang, and Quoc V. Le. Aging Evolution for Image Classifier Architecture Search. In AAAI, 2019.
PNAS:Chenxi Liu, Barret Zoph, Maxim Neumann, Jonathon Shlens, Wei Hua, Li-Jia Li, Li Fei-Fei, Alan Yuille, Jonathan Huang, and Kevin Murphy. Progressive Neural Architecture Search. In European Conference on Computer Vision, 2018a.
ENAS: Hieu Pham, Melody Y. Guan, Barret Zoph, Quoc V. Le, and Jeff Dean. Efficient neural architecture search via parameter sharing. In International Conference on Machine Learning, 2018.
Thomas Elsken, Jan Hendrik Metzen, and Frank Hutter. Efficient multi-objective neural architecture search via lamarckian evolution. In International Conference on Learning Representations, 2019.
Han Cai, Jiacheng Yang, Weinan Zhang, Song Han, and Yong Yu. Path-Level Network Transformation for Efficient Architecture Search. In International Conference on Machine Learning, June 2018b.
DARTS: Hanxiao Liu, Karen Simonyan, and Yiming Yang. DARTS: Differentiable architecture search. In International Conference on Learning Representations, 2019b.
BlockQNN: Zhao Zhong, Junjie Yan, Wei Wu, Jing Shao, and Cheng-Lin Liu. Practical block-wise neural network architecture generation. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 2423–2432, 2018a.

However, cell-based search spaces introduce a new design choice, namely how to choose the macro-architecture: how many cells shall be used and how should they be connected to build the actual model?
Ideally, the macro-architecture and the micro-architecture within cells should be optimized jointly; in practice, a tractable approach is to organize the search space hierarchically (a toy cell-stacking sketch follows the citation below): the first level consists of the set of primitive operations, the second level of different motifs that connect primitive operations via a directed acyclic graph, the third level of motifs that encode how to connect second-level motifs, and so on.
Hanxiao Liu, Karen Simonyan, Oriol Vinyals, Chrisantha Fernando, and Koray Kavukcuoglu. Hierarchical Representations for Efficient Architecture Search. In International Conference on Learning Representations, 2018b.
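As a toy illustration of the cell/macro-architecture split (a deliberately simplified sketch, not the hierarchical scheme of Liu et al.), the snippet below samples a small cell as a DAG over an assumed set of primitive operations and then makes the macro-level choice of how many times to stack it.

```python
import random

PRIMITIVES = ["conv3x3", "conv5x5", "max_pool", "identity"]   # assumed primitive ops

def sample_cell(num_nodes=4):
    """Micro-architecture: each node picks one earlier node as input and one primitive op."""
    return [{"input": random.randrange(node), "op": random.choice(PRIMITIVES)}
            for node in range(1, num_nodes)]

def sample_macro_architecture(min_cells=3, max_cells=12):
    """Macro-architecture: how many copies of the cell are stacked to form the model."""
    return {"cell": sample_cell(), "num_cells": random.randint(min_cells, max_cells)}

print(sample_macro_architecture())
```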

3. Search Strategy

Many search strategies can be used to explore the space of neural architectures, including random search (RS), Bayesian optimization (BO), evolutionary algorithms (EA), reinforcement learning (RL), and gradient-based methods (GM).
In the earlier machine-learning era, EAs were already widely used to optimize neural architectures.
BO has achieved strong results in hyperparameter optimization (HPO) of deep neural networks.

3.1 Reinforcement Learning

RL sparked the recent surge of NAS research. Different RL approaches differ in how they represent the agent's policy and how they optimize it.
(1) Use a recurrent neural network (RNN) policy to sequentially sample a string that in turn encodes the neural architecture (a minimal policy-gradient controller sketch follows the reference lists below); this includes the following work:
Barret Zoph: NAS-RL
Bowen Baker: MetaQNN
NASNet (trained with PPO)
BlockQNN, Faster BlockQNN
(2) Simplify the above RL formulation into a multi-armed bandit (MAB) problem, encode variable-length architectures with a bi-directional LSTM, and train that bi-directional LSTM end-to-end with the REINFORCE policy gradient; this includes the following work:
Han Cai, Tianyao Chen, Weinan Zhang, Yong Yu, and Jun Wang. Efficient architecture search by network transformation. In Association for the Advancement of Artificial Intelligence, 2018a.
Han Cai, Jiacheng Yang, Weinan Zhang, Song Han, and Yong Yu. Path-Level Network Transformation for Efficient Architecture Search. In International Conference on Machine Learning, June 2018b.
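To make the policy-gradient idea behind (1) and (2) concrete, here is a minimal REINFORCE sketch in which a tabular controller picks one operation per layer; the operation names, the dummy reward function, and all hyperparameters are illustrative assumptions, and real controllers (e.g., NAS-RL, NASNet) use an RNN policy and obtain the reward by actually training the sampled architecture.

```python
import numpy as np

OPS = ["conv3x3", "conv5x5", "max_pool"]
logits = np.zeros((3, len(OPS)))          # one categorical distribution per layer decision

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def evaluate(arch):
    # dummy stand-in for "train the architecture and measure validation accuracy"
    return sum(op == "conv3x3" for op in arch) / len(arch)

lr, baseline = 0.5, 0.0
for step in range(200):
    probs = [softmax(row) for row in logits]
    choices = [np.random.choice(len(OPS), p=p) for p in probs]
    reward = evaluate([OPS[c] for c in choices])
    baseline = 0.9 * baseline + 0.1 * reward          # moving-average baseline
    for i, c in enumerate(choices):                   # REINFORCE update per decision
        grad = -probs[i]                              # d(log softmax)/d(logits)
        grad[c] += 1.0
        logits[i] += lr * (reward - baseline) * grad

print([OPS[int(np.argmax(row))] for row in logits])   # most likely architecture
```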

3.2 Evolutionary Algorithms

Neuro-evolutionary methods differ in how they sample parents, update populations, and generate offspring.
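A minimal sketch of one such neuro-evolutionary loop, in the spirit of tournament selection with an aging population (a simplified illustration, not the exact algorithm of Real et al.); the architecture encoding, mutation operator, and fitness function are toy stand-ins.

```python
import random

OPS = ["conv3x3", "conv5x5", "max_pool", "identity"]

def random_arch(n=6):
    return [random.choice(OPS) for _ in range(n)]

def mutate(arch):
    child = list(arch)
    child[random.randrange(len(child))] = random.choice(OPS)   # mutate one layer's op
    return child

def fitness(arch):
    return sum(op == "conv3x3" for op in arch)   # dummy stand-in for validation accuracy

population = [random_arch() for _ in range(20)]  # kept in insertion order (oldest first)
for _ in range(200):
    tournament = random.sample(population, 5)    # sample parents via a tournament
    parent = max(tournament, key=fitness)
    population.append(mutate(parent))            # generate and add the offspring
    population.pop(0)                            # age out the oldest individual

print(max(population, key=fitness))
```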

3.3 Bayesian Optimization

3.4 Gradient-Based Methods

3.5 Other Methods

Monte Carlo Tree Search

hill climbing

4. Performance Estimation Strategy

4.1 Naive Performance Estimation

Architecture search aims to find a neural architecture that maximizes some performance measure. To guide the search, the search strategy needs to estimate the performance of a given architecture A. The simplest strategy is to train the architecture on training data and evaluate its generalization performance on held-out data. However, this naive strategy typically costs thousands of GPU days (e.g., NASNet, AmoebaNet), so NAS under limited compute requires methods that speed up performance estimation.

4.2 Lower-Fidelity Estimates

Lower-fidelity performance estimation can be grouped into four main directions:

  • shorter training times:
    NASNet Barret Zoph, Vijay Vasudevan, Jonathon Shlens, and Quoc V. Le. Learning transferable architectures for scalable image recognition. In Conference on Computer Vision and Pattern Recognition, 2018.
    Arber Zela, Aaron Klein, Stefan Falkner, and Frank Hutter. Towards automated deep learning: Efficient joint neural architecture and hyperparameter search. In ICML 2018 Workshop on AutoML (AutoML 2018), 2018.
  • training on a subset of the data
    Aaron Klein, Stefan Falkner, Simon Bartels, Philipp Hennig, and Frank Hutter. Fast Bayesian Optimization of Machine Learning Hyperparameters on Large Datasets. In Aarti Singh and Jerry Zhu, editors, Proceedings of the 20th International Conference on Artificial Intelligence and Statistics, volume 54 of Proceedings of Machine Learning Research, pages 528–536, Fort Lauderdale, FL, USA, 20–22 Apr 2017b. PMLR.
  • training on lower-resolution images
    Patryk Chrabaszcz, Ilya Loshchilov, and Frank Hutter. A downsampled variant of imagenet as an alternative to the CIFAR datasets. CoRR, abs/1707.08819, 2017.
  • training with fewer filters per layer and fewer cells
    NASNet: Barret Zoph, Vijay Vasudevan, Jonathon Shlens, and Quoc V. Le. Learning transferable architectures for scalable image recognition. In Conference on Computer Vision and Pattern Recognition, 2018.
    AmoebaNet: Esteban Real, Alok Aggarwal, Yanping Huang, and Quoc V. Le. Aging Evolution for Image Classifier Architecture Search. In AAAI, 2019.

However, Zela et al. show that when the performance approximation is too coarse, the relative ranking of architectures can change drastically:
Arber Zela, Aaron Klein, Stefan Falkner, and Frank Hutter. Towards automated deep learning: Efficient joint neural architecture and hyperparameter search. In ICML 2018 Workshop on AutoML (AutoML 2018), 2018.
hence the fidelity should be increased gradually to reduce this error, as in Hyperband and BOHB (a minimal successive-halving sketch follows the references below).
Lisha Li, Kevin Jamieson, Giulia DeSalvo, Afshin Rostamizadeh, and Ameet Talwalkar. Hyperband: bandit-based configuration evaluation for hyperparameter optimization. In International Conference on Learning Representations, 2017.
Stefan Falkner, Aaron Klein, and Frank Hutter. BOHB: Robust and efficient hyperparameter optimization at scale. In Jennifer Dy and Andreas Krause, editors, Proceedings of the 35th International Conference on Machine Learning, volume 80 of Proceedings ofMachine Learning Research, pages 1436–1445, Stockholmsmssan, Stockholm Sweden, 10–15 Jul 2018. PMLR.
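The "gradually increase the fidelity" idea is the core of successive halving, on which Hyperband and BOHB build; the sketch below is a minimal illustration in which train_and_eval is a dummy stand-in for training an architecture for a given number of epochs and returning its validation accuracy.

```python
import random

def train_and_eval(arch, epochs):
    # dummy low/high-fidelity estimate: noisy, and closer to the true quality with more epochs
    return arch["quality"] * (1 - 0.5 ** epochs) + random.gauss(0, 0.02)

candidates = [{"id": i, "quality": random.random()} for i in range(16)]
epochs = 1
while len(candidates) > 1:
    scores = {c["id"]: train_and_eval(c, epochs) for c in candidates}
    candidates.sort(key=lambda c: scores[c["id"]], reverse=True)
    candidates = candidates[: len(candidates) // 2]   # keep the better half
    epochs *= 2                                        # raise the fidelity for survivors

print("selected architecture:", candidates[0]["id"])
```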

4.3 Learning Curve Extrapolation

  • Extrapolate learning curves to terminate the training of poorly-performing architectures early and thus speed up the search (a simple early-termination sketch follows this list); work includes:
    T. Domhan, J. T. Springenberg, and F. Hutter. Speeding up automatic hyperparameter optimization of deep neural networks by extrapolation of learning curves. In Proceedings of the 24th International Joint Conference on Artificial Intelligence (IJCAI), 2015.
  • Combine architectural hyperparameters with partial learning curves to predict the final performance of a given architecture; work includes:
    Kevin Swersky, Jasper Snoek, and Ryan Prescott Adams. Freeze-thaw bayesian optimization. 2014.
    A. Klein, S. Falkner, J. T. Springenberg, and F. Hutter. Learning curve prediction with Bayesian neural networks. In International Conference on Learning Representations, 2017a.
    Bowen Baker, Otkrist Gupta, Ramesh Raskar, and Nikhil Naik. Accelerating Neural Architecture Search using Performance Prediction. In NIPS Workshop on Meta-Learning, 2017b.
    Aditya Rawal and Risto Miikkulainen. From Nodes to Networks: Evolving Recurrent Neural Networks. In arXiv:1803.04439, March 2018.
  • Beyond extrapolation over training time: train a surrogate model on architectures and cells that have already been evaluated, and use it to extrapolate the test performance of larger architectures and additional cell types; work includes:
    PNAS:Chenxi Liu, Barret Zoph, Maxim Neumann, Jonathon Shlens, Wei Hua, Li-Jia Li, Li Fei-Fei, Alan Yuille, Jonathan Huang, and Kevin Murphy. Progressive Neural Architecture Search. In European Conference on Computer Vision, 2018a.
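A much simpler early-termination rule conveys the flavor of these methods (it is not the Bayesian learning-curve models cited above): stop training an architecture once its partial validation curve falls clearly below the best curve observed so far at the same epoch.

```python
def should_terminate(partial_curve, best_curve, margin=0.05):
    """Abort training if the current architecture trails the best-so-far curve by `margin`."""
    epoch = len(partial_curve) - 1
    if epoch >= len(best_curve):
        return False
    return partial_curve[epoch] < best_curve[epoch] - margin

best_curve = [0.40, 0.55, 0.63, 0.70, 0.74]   # validation accuracy of the best architecture so far
partial    = [0.35, 0.45, 0.50]               # current architecture, three epochs into training
print(should_terminate(partial, best_curve))  # True -> stop early and move on
```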

4.4 Weight Inheritance and Network Morphisms

In mathematics, particularly in category theory, a morphism is a structure-preserving map from one mathematical structure to another one of the same type. The notion of morphism recurs in much of contemporary mathematics: in set theory, morphisms are functions; in linear algebra, linear transformations; in group theory, group homomorphisms; in topology, continuous functions; and so on.

The basic idea of network morphisms was introduced by Wei et al.:
Tao Wei, Changhu Wang, Yong Rui, and Chang Wen Chen. Network morphism. In International Conference on Machine Learning, 2016. (introduction slides)

Based on network morphisms, a child network need not be trained from scratch; it is instead initialized by inheriting the weights of its parent model. NAS with such weight inheritance reduces the cost to only a few GPU days (a function-preserving widening sketch follows the citations below):
Thomas Elsken, Jan Hendrik Metzen, and Frank Hutter. Simple And Efficient Architecture Search for Convolutional Neural Networks. In NIPS Workshop on Meta-Learning, 2017.
Han Cai, Tianyao Chen, Weinan Zhang, Yong Yu, and Jun Wang. Efficient architecture search by network transformation. In Association for the Advancement of Artificial Intelligence, 2018a.
Han Cai, Jiacheng Yang, Weinan Zhang, Song Han, and Yong Yu. Path-Level Network Transformation for Efficient Architecture Search. In International Conference on Machine Learning, June 2018b.

However, strict network morphisms can only make networks larger and may eventually yield overly complex architectures; the following work mitigates this by introducing approximate morphisms that also allow shrinking the network:
Thomas Elsken, Jan Hendrik Metzen, and Frank Hutter. Efficient multi-objective neural architecture search via lamarckian evolution. In International Conference on Learning Representations, 2019.
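The NumPy sketch below shows one concrete function-preserving morphism, a Net2WiderNet-style widening of the hidden layer of a two-layer MLP: a hidden unit is duplicated and its outgoing weights are split in half, so the widened child computes exactly the same function as the parent and can simply inherit its weights. The network and helper names are illustrative.

```python
import numpy as np

def relu(x):
    return np.maximum(x, 0)

def widen_hidden_layer(W1, b1, W2, unit):
    """Duplicate hidden unit `unit` and split its outgoing weights (function-preserving)."""
    W1w = np.hstack([W1, W1[:, unit:unit + 1]])      # new unit copies the incoming weights
    b1w = np.append(b1, b1[unit])
    W2w = np.vstack([W2, W2[unit:unit + 1] / 2.0])   # new unit takes half of the outgoing weights
    W2w[unit] /= 2.0                                  # original unit keeps the other half
    return W1w, b1w, W2w

rng = np.random.default_rng(0)
W1, b1, W2 = rng.normal(size=(8, 4)), rng.normal(size=4), rng.normal(size=(4, 2))
x = rng.normal(size=(5, 8))

parent_out = relu(x @ W1 + b1) @ W2
W1w, b1w, W2w = widen_hidden_layer(W1, b1, W2, unit=1)
child_out = relu(x @ W1w + b1w) @ W2w
print(np.allclose(parent_out, child_out))   # True: the child inherits the parent's function
```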

4.5 One-Shot Models and Weight Sharing

  • Abstract definition: One-Shot Architecture Search treats all architectures as different subgraphs of a supergraph (the one-shot model) and shares weights between architectures that have edges of this supergraph in common.

    Figure 4: the left panel shows the one-shot model (the supergraph); the right panel shows a sub-network (a subgraph).

  • The working assumption behind one-shot models: the one-shot model typically incurs a large bias as it underestimates the actual performance of the best architectures severely; nevertheless, it allows ranking architectures, which would be sufficient if the estimated performance correlates strongly with the actual performance.
    Shreyas Saxena and Jakob Verbeek. Convolutional neural fabrics. In D. D. Lee, M. Sugiyama, U. V. Luxburg, I. Guyon, and R. Garnett, editors, Advances in Neural Information Processing Systems 29, pages 4053–4061. Curran Associates, Inc., 2016.
    Andrew Brock, Theodore Lim, James M. Ritchie, and Nick Weston. SMASH: one-shot model architecture search through hypernetworks. In NIPS Workshop on Meta-Learning, 2017.
    ENAS : Hieu Pham, Melody Y. Guan, Barret Zoph, Quoc V. Le, and Jeff Dean. Efficient neural architecture search via parameter sharing. In International Conference on Machine Learning, 2018.
    Gabriel Bender, Pieter-Jan Kindermans, Barret Zoph, Vijay Vasudevan, and Quoc Le. Understanding and simplifying one-shot architecture search. In International Conference on Machine Learning, 2018.
    Han Cai, Ligeng Zhu, and Song Han. ProxylessNAS: Direct neural architecture search on target task and hardware. In International Conference on Learning Representations, 2019.
    DARTS : Hanxiao Liu, Karen Simonyan, and Yiming Yang. DARTS: Differentiable architecture search. In International Conference on Learning Representations, 2019b.
    SNAS : Sirui Xie, Hehui Zheng, Chunxiao Liu, and Liang Lin. SNAS: stochastic neural architecture search. In International Conference on Learning Representations, 2019.

  • A limitation of the one-shot approach: there is no conclusive evidence yet on whether the relative ranking it produces is accurate.
    Gabriel Bender, Pieter-Jan Kindermans, Barret Zoph, Vijay Vasudevan, and Quoc Le. Understanding and simplifying one-shot architecture search. In International Conference on Machine Learning, 2018.
    Christian Sciuto, Kaicheng Yu, Martin Jaggi, Claudiu Musat, and Mathieu Salzmann. Evaluating the search phase of neural architecture search. arXiv preprint, 2019.

  • One-shot variants: different one-shot NAS methods mainly differ in how the one-shot model is trained (a toy sketch of the DARTS-style relaxation follows this list):
    ENAS: The controller is trained with policy gradient to select a subgraph that maximizes the expected reward on the validation set. Meanwhile the model corresponding to the selected subgraph is trained to minimize a canonical cross entropy loss.
    DARTS: optimizes all weights of the one-shot model jointly with a continuous relaxation of the search space, obtained by placing a mixture of candidate operations on each edge of the one-shot model.
    SNAS
    ProxylessNAS

Gabriel Bender, Pieter-Jan Kindermans, Barret Zoph, Vijay Vasudevan, and Quoc Le. Understanding and simplifying one-shot architecture search. In International Conference on Machine Learning, 2018.
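As a toy illustration of the DARTS-style continuous relaxation described above, the sketch below replaces the discrete operation choice on a single edge with a softmax-weighted mixture of candidate operations; the candidates here are trivial stand-ins for real convolution/pooling ops, and the gradient-based update of the architecture parameters alpha is omitted.

```python
import numpy as np

def softmax(a):
    e = np.exp(a - a.max())
    return e / e.sum()

# toy stand-ins for the candidate operations on one edge of the one-shot model
candidate_ops = {
    "identity": lambda x: x,
    "double":   lambda x: 2 * x,
    "zero":     lambda x: np.zeros_like(x),
}

alpha = np.zeros(len(candidate_ops))   # architecture parameters (learned by gradient descent in DARTS)

def mixed_op(x, alpha):
    """Continuous relaxation: a softmax-weighted sum of all candidate operations."""
    weights = softmax(alpha)
    return sum(w * op(x) for w, op in zip(weights, candidate_ops.values()))

x = np.ones(4)
print(mixed_op(x, alpha))                           # alpha = 0 -> uniform mixture of candidates
print(list(candidate_ops)[int(np.argmax(alpha))])   # discretization: keep the op with largest alpha
```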

5. Future Directions
