Strictly speaking, acceleration and compression are not the same thing, but in practice we usually do both at once: each reduces the computational cost of a network, so it is customary to treat them as one topic.

The main lines of work are low-rank approximation, network pruning, network quantization, knowledge distillation, and compact network design.

Low-rank approximation
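Low-rank approximation replaces a large weight matrix with the product of two much thinner matrices. A minimal PyTorch sketch of the idea, assuming a truncated SVD of a fully connected layer (the layer sizes and rank are illustrative, not taken from any paper below):

```python
# Minimal sketch: approximate a Linear layer W (out x in) by two thinner
# layers, W ≈ U_r @ V_r, cutting parameters from out*in to r*(out+in).
import torch
import torch.nn as nn

def lowrank_factorize(fc: nn.Linear, rank: int) -> nn.Sequential:
    W = fc.weight.data                          # shape: (out_features, in_features)
    U, S, Vh = torch.linalg.svd(W, full_matrices=False)
    U_r = U[:, :rank] * S[:rank]                # (out, r), absorb singular values
    V_r = Vh[:rank, :]                          # (r, in)
    first = nn.Linear(fc.in_features, rank, bias=False)
    second = nn.Linear(rank, fc.out_features, bias=fc.bias is not None)
    first.weight.data.copy_(V_r)
    second.weight.data.copy_(U_r)
    if fc.bias is not None:
        second.bias.data.copy_(fc.bias.data)
    return nn.Sequential(first, second)

fc = nn.Linear(1024, 1024)
approx = lowrank_factorize(fc, rank=64)         # ~1.05M params -> ~0.13M params
```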

Network pruning
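Pruning removes individual weights, filters, or channels that contribute little to the output. A minimal sketch of unstructured magnitude (L1) pruning using PyTorch's built-in pruning utilities; the toy model and the 90% sparsity level are illustrative:

```python
# Minimal sketch of magnitude (L1) pruning with torch.nn.utils.prune;
# the sparsity ratio and toy model are illustrative, not from the papers below.
import torch.nn as nn
import torch.nn.utils.prune as prune

model = nn.Sequential(nn.Conv2d(3, 16, 3), nn.ReLU(), nn.Conv2d(16, 32, 3))

for module in model.modules():
    if isinstance(module, nn.Conv2d):
        # Zero out the 90% of weights with the smallest absolute value.
        prune.l1_unstructured(module, name="weight", amount=0.9)

# Make the pruning permanent (fold the binary mask into the weight tensor).
for module in model.modules():
    if isinstance(module, nn.Conv2d):
        prune.remove(module, "weight")
```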

Network quantization
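Quantization stores weights (and often activations) in low-precision formats such as 8-bit integers instead of 32-bit floats. A purely illustrative sketch of symmetric per-tensor 8-bit weight quantization, followed by dequantization to inspect the rounding error:

```python
# Minimal sketch of symmetric per-tensor 8-bit weight quantization
# (quantize, then dequantize to measure the error); purely illustrative.
import torch

def quantize_weight(w: torch.Tensor, num_bits: int = 8):
    qmax = 2 ** (num_bits - 1) - 1                      # 127 for int8
    scale = w.abs().max() / qmax                        # per-tensor scale
    q = torch.clamp(torch.round(w / scale), -qmax, qmax).to(torch.int8)
    return q, scale

def dequantize_weight(q: torch.Tensor, scale: torch.Tensor) -> torch.Tensor:
    return q.float() * scale

w = torch.randn(64, 64)
q, scale = quantize_weight(w)
w_hat = dequantize_weight(q, scale)
print("max abs quantization error:", (w - w_hat).abs().max().item())
```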

Compact network design
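Compact design builds efficiency into the architecture itself, for example via group or depthwise separable convolutions as in Xception, ResNeXt, and the interleaved-group-convolution papers listed below. A minimal PyTorch sketch of a depthwise separable convolution block (channel sizes are illustrative):

```python
# Minimal sketch of a depthwise separable convolution block, the building block
# behind Xception/MobileNet-style compact designs; channel sizes are illustrative.
import torch
import torch.nn as nn

class DepthwiseSeparableConv(nn.Module):
    def __init__(self, in_ch: int, out_ch: int, stride: int = 1):
        super().__init__()
        # Depthwise: one 3x3 filter per input channel (groups=in_ch).
        self.depthwise = nn.Conv2d(in_ch, in_ch, kernel_size=3, stride=stride,
                                   padding=1, groups=in_ch, bias=False)
        # Pointwise: 1x1 convolution mixes information across channels.
        self.pointwise = nn.Conv2d(in_ch, out_ch, kernel_size=1, bias=False)
        self.bn = nn.BatchNorm2d(out_ch)
        self.act = nn.ReLU(inplace=True)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.act(self.bn(self.pointwise(self.depthwise(x))))

block = DepthwiseSeparableConv(64, 128)
y = block(torch.randn(1, 64, 32, 32))   # ~9K params vs ~74K for a plain 3x3 conv
```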

Knowledge distillation
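Distillation trains a small student network to match the softened outputs of a larger teacher. A minimal sketch of the standard soft-target loss (KL divergence between temperature-softened logits plus hard-label cross-entropy); the temperature and weighting below are illustrative hyper-parameters:

```python
# Minimal sketch of the classic soft-target distillation loss; the temperature
# and alpha are illustrative hyper-parameters, not values from the papers below.
import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, labels,
                      temperature: float = 4.0, alpha: float = 0.9):
    soft_targets = F.log_softmax(teacher_logits / temperature, dim=1)
    soft_preds = F.log_softmax(student_logits / temperature, dim=1)
    # KL(teacher || student) on softened distributions, scaled by T^2 as usual.
    kd = F.kl_div(soft_preds, soft_targets, log_target=True,
                  reduction="batchmean") * temperature ** 2
    ce = F.cross_entropy(student_logits, labels)
    return alpha * kd + (1 - alpha) * ce

student_logits = torch.randn(8, 10)
teacher_logits = torch.randn(8, 10)
labels = torch.randint(0, 10, (8,))
loss = distillation_loss(student_logits, teacher_logits, labels)
```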

  • 2011-JMLR-Learning with Structured Sparsity
  • 2013-NIPS-Predicting Parameters in Deep Learning
  • 2014-BMVC-Speeding up convolutional neural networks with low rank expansions
  • 2014-NIPS-Exploiting Linear Structure Within Convolutional Networks for Efficient Evaluation
  • 2014-NIPS-Do deep neural nets really need to be deep
  • 2015-ICML-Compressing neural networks with the hashing trick
  • 2015-INTERSPEECH-A Diversity-Penalizing Ensemble Training Method for Deep Learning
  • 2015-BMVC-Data-free parameter pruning for deep neural networks
  • 2015-NIPS-Learning both Weights and Connections for Efficient Neural Network
  • 2015-NIPSw-Distilling Intractable Generative Models
  • 2015-CVPR-Learning to generate chairs with convolutional neural networks
  • 2015-CVPR-Understanding deep image representations by inverting them [2016 IJCV version: Visualizing deep convolutional neural networks using natural pre-images]
  • 2015-CVPR-Efficient and Accurate Approximations of Nonlinear Convolutional Networks [2016 TPAMI version: Accelerating Very Deep Convolutional Networks for Classification and Detection]
  • 2016-ICLRb-Deep Compression: Compressing Deep Neural Networks with Pruning, Trained Quantization and Huffman Coding
  • 2016-ICLR-All you need is a good init [Code]
  • 2016-ICLR-Convolutional neural networks with low-rank regularization
  • 2016-ICLR-Diversity networks
  • 2016-EMNLP-Sequence-Level Knowledge Distillation
  • 2016-CVPR-Inverting Visual Representations with Convolutional Networks
  • 2016-NIPS-Learning Structured Sparsity in Deep Neural Networks
  • 2016-NIPS-Dynamic Network Surgery for Efficient DNNs
  • 2016.10-Deep model compression: Distilling knowledge from noisy teachers
  • 2017-ICLR-Pruning Convolutional Neural Networks for Resource Efficient Inference
  • 2017-ICLR-Incremental Network Quantization: Towards Lossless CNNs with Low-Precision Weights
  • 2017-ICLR-Do Deep Convolutional Nets Really Need to be Deep and Convolutional?
  • 2017-ICML-Variational dropout sparsifies deep neural networks
  • 2017-CVPR-Learning deep CNN denoiser prior for image restoration
  • 2017-CVPR-Deep roots: Improving cnn efficiency with hierarchical filter groups
  • 2017-CVPR-All You Need is Beyond a Good Init: Exploring Better Solution for Training Extremely Deep Convolutional Neural Networks with Orthonormality and Modulation
  • 2017-CVPR-ResNeXt-Aggregated Residual Transformations for Deep Neural Networks
  • 2017-CVPR-Xception: Deep learning with depthwise separable convolutions
  • 2017-ICCV-Channel pruning for accelerating very deep neural networks [Code]
  • 2017-ICCV-Learning efficient convolutional networks through network slimming [Code]
  • 2017-ICCV-ThiNet: A filter level pruning method for deep neural network compression [Project]
  • 2017-ICCV-Interleaved group convolutions
  • 2017-NIPS-Net-trim: Convex pruning of deep neural networks with performance guarantee
  • 2017-NIPS-Learning to Prune Deep Neural Networks via Layer-wise Optimal Brain Surgeon
  • 2017-NNs-Nonredundant sparse feature extraction using autoencoders with receptive fields clustering
  • 2017.02-The Power of Sparsity in Convolutional Neural Networks
  • 2018-AAAI-Auto-balanced Filter Pruning for Efficient Convolutional Neural Networks
  • 2018-AAAI-Deep Neural Network Compression with Single and Multiple Level Quantization
  • 2018-ICML-On the Optimization of Deep Networks: Implicit Acceleration by Overparameterization
  • 2018-ICMLw-Assessing the Scalability of Biologically-Motivated Deep Learning Algorithms and Architectures
  • 2018-ICLRo-Training and Inference with Integers in Deep Neural Networks
  • 2018-ICLR-Rethinking the Smaller-Norm-Less-Informative Assumption in Channel Pruning of Convolution Layers
  • 2018-ICLR-N2N learning: Network to Network Compression via Policy Gradient Reinforcement Learning
  • 2018-ICLR-Model compression via distillation and quantization
  • 2018-ICLR-Towards Image Understanding from Deep Compression Without Decoding
  • 2018-ICLR-Deep Gradient Compression: Reducing the Communication Bandwidth for Distributed Training
  • 2018-ICLR-Mixed Precision Training of Convolutional Neural Networks using Integer Operations
  • 2018-ICLR-Apprentice: Using Knowledge Distillation Techniques To Improve Low-Precision Network Accuracy
  • 2018-ICLR-Loss-aware Weight Quantization of Deep Networks
  • 2018-ICLR-Alternating Multi-bit Quantization for Recurrent Neural Networks
  • 2018-ICLR-Adaptive Quantization of Neural Networks
  • 2018-ICLR-Variational Network Quantization
  • 2018-ICLR-Learning Sparse Neural Networks through L0 Regularization
  • 2018-ICLRw-To Prune, or Not to Prune: Exploring the Efficacy of Pruning for Model Compression (Similar topic: 2018-NIPSw-nip in the bud, 2018-NIPSw-rethink)
  • 2018-ICLRw-Systematic Weight Pruning of DNNs using Alternating Direction Method of Multipliers
  • 2018-ICLRw-Weightless: Lossy weight encoding for deep neural network compression
  • 2018-ICLRw-Variance-based Gradient Compression for Efficient Distributed Deep Learning
  • 2018-ICLRw-Stacked Filters Stationary Flow For Hardware-Oriented Acceleration Of Deep Convolutional Neural Networks
  • 2018-ICLRw-Training Shallow and Thin Networks for Acceleration via Knowledge Distillation with Conditional Adversarial Networks
  • 2018-ICLRw-Accelerating Neural Architecture Search using Performance Prediction
  • 2018-ICLRw-Nonlinear Acceleration of CNNs
  • 2018-CVPR-Context-Aware Deep Feature Compression for High-Speed Visual Tracking
  • 2018-CVPR-NISP: Pruning Networks using Neuron Importance Score Propagation
  • 2018-CVPR-“Learning-Compression” Algorithms for Neural Net Pruning
  • 2018-CVPR-Deep Image Prior [Code]
  • 2018-CVPR-Condensenet: An efficient densenet using learned group convolutions
  • 2018-CVPR-Shift: A zero flop, zero parameter alternative to spatial convolutions
  • 2018-CVPR-Interleaved structured sparse convolutional neural networks
  • 2018-IJCAI-Efficient DNN Neuron Pruning by Minimizing Layer-wise Nonlinear Reconstruction Error
  • 2018-IJCAI-Soft Filter Pruning for Accelerating Deep Convolutional Neural Networks
  • 2018-IJCAI-Where to Prune: Using LSTM to Guide End-to-end Pruning
  • 2018-IJCAI-Accelerating Convolutional Networks via Global & Dynamic Filter Pruning
  • 2018-IJCAI-Optimization based Layer-wise Magnitude-based Pruning for DNN Compression
  • 2018-IJCAI-Progressive Blockwise Knowledge Distillation for Neural Network Acceleration
  • 2018-IJCAI-Complementary Binary Quantization for Joint Multiple Indexing
  • 2018-ECCV-A Systematic DNN Weight Pruning Framework using Alternating Direction Method of Multipliers
  • 2018-ECCV-Coreset-Based Neural Network Compression
  • 2018-ECCV-Data-Driven Sparse Structure Selection for Deep Neural Networks [Code]
  • 2018-BMVCo-Structured Probabilistic Pruning for Convolutional Neural Network Acceleration
  • 2018-BMVC-Efficient Progressive Neural Architecture Search
  • 2018-BMVC-Igcv3: Interleaved lowrank group convolutions for efficient deep neural networks
  • 2018-NIPS-Discrimination-aware Channel Pruning for Deep Neural Networks
  • 2018-NIPS-Frequency-Domain Dynamic Pruning for Convolutional Neural Networks
  • 2018-NIPS-ChannelNets: Compact and Efficient Convolutional Neural Networks via Channel-Wise Convolutions
  • 2018-NIPS-DropBlock: A regularization method for convolutional networks
  • 2018-NIPS-Constructing fast network through deconstruction of convolution
  • 2018-NIPS-Learning Versatile Filters for Efficient Convolutional Neural Networks [Code]
  • 2018-NIPSw-Pruning neural networks: is it time to nip it in the bud?
  • 2018-NIPSwb-Rethinking the Value of Network Pruning [2019 ICLR version]
  • 2018-NIPSw-Structured Pruning for Efficient ConvNets via Incremental Regularization
  • 2018.05-Compression of Deep Convolutional Neural Networks under Joint Sparsity Constraints
  • 2018.05-AutoPruner: An End-to-End Trainable Filter Pruning Method for Efficient Deep Model Inference
  • 2018.11-Second-order Optimization Method for Large Mini-batch: Training ResNet-50 on ImageNet in 35 Epochs
  • 2018.11-Rethinking ImageNet Pre-training (Kaiming He)
  • 2019-ICLRo-The Lottery Ticket Hypothesis: Finding Sparse, Trainable Neural Networks
  • 2019-AAAIo-A layer decomposition-recomposition framework for neuron pruning towards accurate lightweight networks
  • 2019-CVPR-All You Need is a Few Shifts: Designing Efficient Convolutional Neural Networks for Image Classification
  • 2019-CVPR-HetConv Heterogeneous Kernel-Based Convolutions for Deep CNNs
  • 2019-CVPR-Fully Learnable Group Convolution for Acceleration of Deep Neural Networks
  • 2019-CVPR-Towards Optimal Structured CNN Pruning via Generative Adversarial Learning
  • 2019-CVPR-Centripetal SGD for Pruning Very Deep Convolutional Networks with Complicated Structure
  • 2019-BigComp-Towards Robust Compressed Convolutional Neural Networks
  • 2019-PR-Filter-in-Filter: Improve CNNs in a Low-cost Way by Sharing Parameters among the Sub-filters of a Filter
  • 2019-PRL-BDNN: Binary Convolution Neural Networks for Fast Object Detection
  • 2019.03-MetaPruning: Meta Learning for Automatic Neural Network Channel Pruning (Face++)
  • 2019.04-Data-Free Learning of Student Networks (Huawei)
  • 2019.04-Resource Efficient 3D Convolutional Neural Networks
  • 2019.04-Meta Filter Pruning to Accelerate Deep Convolutional Neural Networks
  • 2019.04-Knowledge Squeezed Adversarial Network Compression
