Deep learning

Yann LeCun, Yoshua Bengio & Geoffrey Hinton

Abstract

   Deep learning allows computational models that are composed of multiple processing layers to learn representations of data with multiple levels of abstraction. These methods have dramatically improved the state-of-the-art in speech recognition, visual object recognition, object detection and many other domains such as drug discovery and genomics. Deep learning discovers intricate structure in large data sets by using the backpropagation algorithm to indicate how a machine should change its internal parameters that are used to compute the representation in each layer from the representation in the previous layer. Deep convolutional nets have brought about breakthroughs in processing images, video, speech and audio, whereas recurrent nets have shone light on sequential data such as text and speech.

Main text

  Machine-learning technology powers many aspects of modern society: from web searches to content filtering on social networks to recommendations on e-commerce websites, and it is increasingly present in consumer products such as cameras and smartphones. Machine learning systems are used to identify objects in images, transcribe speech into text, match news items, posts or products with users’ interests, and select relevant results of search. Increasingly, these applications make use of a class of techniques called deep learning.
  Conventional machine-learning techniques were limited in their ability to process natural data in their raw form. For decades, constructing a pattern-recognition or machine-learning system required careful engineering and considerable domain expertise to design a feature extractor that transformed the raw data (such as the pixel values of an image) into a suitable internal representation or feature vector from which the learning subsystem, often a classifier, could detect or classify patterns in the input.
  Representation learning is a set of methods that allows a machine to be fed with raw data and to automatically discover the representations needed for detection or classification. Deep-learning methods are representation-learning methods with multiple levels of representation, obtained by composing simple but non-linear modules that each transform the representation at one level (starting with the raw input) into a representation at a higher, slightly more abstract level. With the composition of enough such transformations, very complex functions can be learned. For classification tasks, higher layers of representation amplify aspects of the input that are important for discrimination and suppress irrelevant variations. An image, for example, comes in the form of an array of pixel values, and the learned features in the first layer of representation typically represent the presence or absence of edges at particular orientations and locations in the image. The second layer typically detects motifs by spotting particular arrangements of edges, regardless of small variations in the edge positions. The third layer may assemble motifs into larger combinations that correspond to parts of familiar objects, and subsequent layers would detect objects as combinations of these parts. The key aspect of deep learning is that these layers of features are not designed by human engineers: they are learned from data using a general-purpose learning procedure.

  Deep learning is making major advances in solving problems that have resisted the best attempts of the artificial intelligence community for many years. It has turned out to be very good at discovering intricate structures in high-dimensional data and is therefore applicable to many domains of science, business and government. In addition to beating records in image recognition and speech recognition, it has beaten other machine-learning techniques at predicting the activity of potential drug molecules, analysing particle accelerator data, reconstructing brain circuits, and predicting the effects of mutations in non-coding DNA on gene expression and disease. Perhaps more surprisingly, deep learning has produced extremely promising results for various tasks in natural language understanding, particularly topic classification, sentiment analysis, question answering and language translation.
  We think that deep learning will have many more successes in the near future because it requires very little engineering by hand, so it can easily take advantage of increases in the amount of available computation and data. New learning algorithms and architectures that are currently being developed for deep neural networks will only accelerate this progress.

Supervised learning

  The most common form of machine learning, deep or not, is supervised learning. Imagine that we want to build a system that can classify images as containing, say, a house, a car, a person or a pet. We first collect a large data set of images of houses, cars, people and pets, each labelled with its category. During training, the machine is shown an image and produces an output in the form of a vector of scores, one for each category. We want the desired category to have the highest score of all categories, but this is unlikely to happen before training. We compute an objective function that measures the error (or distance) between the output scores and the desired pattern of scores. The machine then modifies its internal adjustable parameters to reduce this error. These adjustable parameters, often called weights, are real numbers that can be seen as ‘knobs’ that define the input–output function of the machine. In a typical deep-learning system, there may be hundreds of millions of these adjustable weights, and hundreds of millions of labelled examples with which to train the machine.
  To properly adjust the weight vector, the learning algorithm computes a gradient vector that, for each weight, indicates by what amount the error would increase or decrease if the weight were increased by a tiny amount. The weight vector is then adjusted in the opposite direction to the gradient vector.
  The objective function, averaged over all the training examples, can be seen as a kind of hilly landscape in the high-dimensional space of weight values. The negative gradient vector indicates the direction of steepest descent in this landscape, taking it closer to a minimum, where the output error is low on average.
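  To make the update direction concrete, here is a minimal Python sketch (not from the paper; the two-weight objective, learning rate and step count are invented for illustration) that estimates the gradient by finite differences, i.e. by measuring how much the error changes when each weight is increased by a tiny amount, and then moves the weights in the opposite direction:

```python
import numpy as np

def objective(w):
    # A made-up bowl-shaped error surface over two weights.
    return (w[0] - 3.0) ** 2 + 0.5 * (w[1] + 1.0) ** 2

def finite_difference_gradient(f, w, eps=1e-6):
    # For each weight, ask: by what amount does the error change
    # if this weight is increased by a tiny amount?
    grad = np.zeros_like(w)
    for i in range(len(w)):
        w_plus = w.copy()
        w_plus[i] += eps
        grad[i] = (f(w_plus) - f(w)) / eps
    return grad

w = np.array([0.0, 0.0])
learning_rate = 0.1
for step in range(50):
    g = finite_difference_gradient(objective, w)
    w -= learning_rate * g          # step opposite to the gradient
print(w, objective(w))              # w approaches (3, -1); the error approaches 0
```

In real deep-learning systems the gradient is obtained by backpropagation rather than finite differences, but the update rule is the same.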

  In practice, most practitioners use a procedure called stochastic gradient descent (SGD). This consists of showing the input vector for a few examples, computing the outputs and the errors, computing the average gradient for those examples, and adjusting the weights accordingly. The process is repeated for many small sets of examples from the training set until the average of the objective function stops decreasing. It is called stochastic because each small set of examples gives a noisy estimate of the average gradient over all examples. This simple procedure usually finds a good set of weights surprisingly quickly when compared with far more elaborate optimization techniques. After training, the performance of the system is measured on a different set of examples called a test set. This serves to test the generalization ability of the machine — its ability to produce sensible answers on new inputs that it has never seen during training.
  Many of the current practical applications of machine learning use linear classifiers on top of hand-engineered features. A two-class linear classifier computes a weighted sum of the feature vector components. If the weighted sum is above a threshold, the input is classified as belonging to a particular category.
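  The two preceding paragraphs can be combined into a short sketch. The following is a minimal illustration (not from the paper) of SGD on a two-class linear classifier, assuming a logistic loss; the synthetic data, dimensions, learning rate and batch size are all made up:

```python
import numpy as np

rng = np.random.default_rng(0)
n, d = 1000, 20
X = rng.normal(size=(n, d))                    # feature vectors (e.g. hand-engineered features)
true_w = rng.normal(size=d)
y = (X @ true_w > 0).astype(float)             # 0/1 labels from a hidden linear rule

w = np.zeros(d)                                # adjustable parameters, the 'knobs'
lr, batch_size = 0.1, 32

for step in range(2000):
    idx = rng.integers(0, n, size=batch_size)  # a few examples from the training set
    xb, yb = X[idx], y[idx]
    scores = xb @ w                            # weighted sum of feature-vector components
    p = 1.0 / (1.0 + np.exp(-scores))          # predicted probability of the positive class
    grad = xb.T @ (p - yb) / batch_size        # noisy estimate of the average gradient
    w -= lr * grad                             # adjust the weights accordingly

pred = (X @ w > 0).astype(float)               # classify: weighted sum above a threshold of 0
print("training accuracy:", (pred == y).mean())
```

A proper experiment would of course measure accuracy on a held-out test set, as described above, rather than on the training data.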

  Since the 1960s we have known that linear classifiers can only carve their input space into very simple regions, namely half-spaces separated by a hyperplane. But problems such as image and speech recognition require the input–output function to be insensitive to irrelevant variations of the input, such as variations in position, orientation or illumination of an object, or variations in the pitch or accent of speech, while being very sensitive to particular minute variations (for example, the difference between a white wolf and a breed of wolf-like white dog called a Samoyed). At the pixel level, images of two Samoyeds in different poses and in different environments may be very different from each other, whereas two images of a Samoyed and a wolf in the same position and on similar backgrounds may be very similar to each other. A linear classifier, or any other ‘shallow’ classifier operating on raw pixels could not possibly distinguish the latter two, while putting the former two in the same category. This is why shallow classifiers require a good feature extractor that solves the selectivity–invariance dilemma — one that produces representations that are selective to the aspects of the image that are important for discrimination, but that are invariant to irrelevant aspects such as the pose of the animal. To make classifiers more powerful, one can use generic non-linear features, as with kernel methods, but generic features such as those arising with the Gaussian kernel do not allow the learner to generalize well far from the training examples. The conventional option is to hand design good feature extractors, which requires a considerable amount of engineering skill and domain expertise. But this can all be avoided if good features can be learned automatically using a general-purpose learning procedure. This is the key advantage of deep learning.
  A deep-learning architecture is a multilayer stack of simple modules, all (or most) of which are subject to learning, and many of which compute non-linear input–output mappings. Each module in the stack transforms its input to increase both the selectivity and the invariance of the representation. With multiple non-linear layers, say a depth of 5 to 20, a system can implement extremely intricate functions of its inputs that are simultaneously sensitive to minute details — distinguishing Samoyeds from white wolves — and insensitive to large irrelevant variations such as the background, pose, lighting and surrounding objects.

Backpropagation to train multilayer architectures

  From the earliest days of pattern recognition, the aim of researchers has been to replace hand-engineered features with trainable multilayer networks, but despite its simplicity, the solution was not widely understood until the mid 1980s. As it turns out, multilayer architectures can be trained by simple stochastic gradient descent. As long as the modules are relatively smooth functions of their inputs and of their internal weights, one can compute gradients using the backpropagation procedure. The idea that this could be done, and that it worked, was discovered independently by several different groups during the 1970s and 1980s.
  The backpropagation procedure to compute the gradient of an objective function with respect to the weights of a multilayer stack of modules is nothing more than a practical application of the chain rule for derivatives. The key insight is that the derivative (or gradient) of the objective with respect to the input of a module can be computed by working backwards from the gradient with respect to the output of that module (or the input of the subsequent module) (Fig. 1). The backpropagation equation can be applied repeatedly to propagate gradients through all modules, starting from the output at the top (where the network produces its prediction) all the way to the bottom (where the external input is fed). Once these gradients have been computed, it is straightforward to compute the gradients with respect to the weights of each module.
  Many applications of deep learning use feedforward neural network architectures (Fig. 1), which learn to map a fixed-size input (for example, an image) to a fixed-size output (for example, a probability for each of several categories). To go from one layer to the next, a set of units compute a weighted sum of their inputs from the previous layer and pass the result through a non-linear function. At present, the most popular non-linear function is the rectified linear unit (ReLU), which is simply the half-wave rectifier f(z) = max(z, 0). In past decades, neural nets used smoother non-linearities, such as tanh(z) or 1/(1 + exp(−z)), but the ReLU typically learns much faster in networks with many layers, allowing training of a deep supervised network without unsupervised pre-training. Units that are not in the input or output layer are conventionally called hidden units. The hidden layers can be seen as distorting the input in a non-linear way so that categories become linearly separable by the last layer (Fig. 1).
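  For reference, the non-linearities named in this paragraph are only a few lines of code each (a plain restatement of the formulas, nothing more):

```python
import numpy as np

def relu(z):
    return np.maximum(z, 0.0)        # half-wave rectifier f(z) = max(z, 0)

def tanh(z):
    return np.tanh(z)                # smooth, saturating non-linearity

def logistic(z):
    return 1.0 / (1.0 + np.exp(-z))  # 1 / (1 + exp(-z))

z = np.linspace(-3.0, 3.0, 7)
print(relu(z), tanh(z), logistic(z))
```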

Figure 1. Multilayer neural networks and backpropagation.   a. A multilayer neural network (shown by the connected dots) can distort the input space to make the classes of data (examples of which are on the red and blue lines) linearly separable. Note how a regular grid in input space (shown on the left) is transformed by the hidden units (shown in the middle panel). This is an illustrative example with only two input units, two hidden units and one output unit, but the networks used for object recognition or natural language processing contain many thousands of units.
  b. The chain rule of derivatives tells us how two small effects (that of a small change of x on y, and that of y on z) are composed. A small change Δx in x is first transformed into a small change Δy in y by multiplication by ∂y/∂x (that is, the definition of the partial derivative). Similarly, the change Δy creates a change Δz in z. Substituting one equation into the other gives the chain rule of derivatives: Δx gets turned into Δz through multiplication by the product of ∂y/∂x and ∂z/∂y. It also works when x, y and z are vectors (and the derivatives are Jacobian matrices).
  c. The equations used for computing the forward pass in a neural net with two hidden layers and one output layer, each of which constitutes a module through which gradients can be backpropagated. At each layer, we first compute the total input z to each unit, which is a weighted sum of the outputs of the units in the layer below. Then a non-linear function f(.) is applied to z to get the output of the unit. For simplicity, bias terms are omitted. The non-linear functions used in neural networks include the rectified linear unit (ReLU) f(z) = max(0, z), commonly used in recent years, as well as the more conventional sigmoids, such as the hyperbolic tangent f(z) = (exp(z) − exp(−z))/(exp(z) + exp(−z)) and the logistic function f(z) = 1/(1 + exp(−z)).
  d. The equations used for computing the backward pass. At each hidden layer we compute the error derivative with respect to the output of each unit, which is a weighted sum of the error derivatives with respect to the total inputs to the units in the layer above. We then convert the error derivative with respect to the output into the error derivative with respect to the input by multiplying it by the gradient of f(z). At the output layer, the error derivative with respect to the output of a unit is computed by differentiating the cost function. This gives y_l − t_l if the cost function for unit l is 0.5(y_l − t_l)^2, where t_l is the target value. Once the ∂E/∂z_k are known, the error derivative for the weight w_jk on the connection from unit j in the layer below is just y_j ∂E/∂z_k.
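  To make the forward and backward equations of Fig. 1c and 1d concrete, here is a minimal numpy sketch (not the paper's code) of one forward pass, one backward pass and one weight update for a network with two hidden layers. The layer sizes, random weights, ReLU hidden units, linear output unit and squared-error cost are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(1)
n_in, n_h1, n_h2, n_out = 4, 5, 3, 2
W1 = rng.normal(scale=0.5, size=(n_h1, n_in))
W2 = rng.normal(scale=0.5, size=(n_h2, n_h1))
W3 = rng.normal(scale=0.5, size=(n_out, n_h2))

x = rng.normal(size=n_in)          # input vector
t = np.array([1.0, 0.0])           # target output

def relu(z):
    return np.maximum(z, 0.0)

# Forward pass: at each layer, a weighted sum z of the layer below, then f(z).
z1 = W1 @ x;  h1 = relu(z1)
z2 = W2 @ h1; h2 = relu(z2)
y = W3 @ h2                        # linear output unit, for simplicity
E = 0.5 * np.sum((y - t) ** 2)     # cost 0.5 * sum_l (y_l - t_l)^2

# Backward pass: propagate error derivatives from the output towards the input.
dE_dy = y - t                               # derivative of the cost w.r.t. the output
dE_dz2 = (W3.T @ dE_dy) * (z2 > 0)          # through the output weights, then the ReLU gradient
dE_dz1 = (W2.T @ dE_dz2) * (z1 > 0)

# Gradients w.r.t. the weights: outer products of 'errors' and layer inputs.
dE_dW3 = np.outer(dE_dy, h2)
dE_dW2 = np.outer(dE_dz2, h1)
dE_dW1 = np.outer(dE_dz1, x)

lr = 0.05                                   # one gradient-descent step on all weights
W1 -= lr * dE_dW1; W2 -= lr * dE_dW2; W3 -= lr * dE_dW3
print("error before the update:", E)
```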

  In the late 1990s, neural nets and backpropagation were largely forsaken by the machine-learning community and ignored by the computer-vision and speech-recognition communities. It was widely thought that learning useful, multistage, feature extractors with little prior knowledge was infeasible. In particular, it was commonly thought that simple gradient descent would get trapped in poor local minima — weight configurations for which no small change would reduce the average error.
  In practice, poor local minima are rarely a problem with large networks. Regardless of the initial conditions, the system nearly always reaches solutions of very similar quality. Recent theoretical and empirical results strongly suggest that local minima are not a serious issue in general. Instead, the landscape is packed with a combinatorially large number of saddle points where the gradient is zero, and the surface curves up in most dimensions and curves down in the remainder. The analysis seems to show that saddle points with only a few downward curving directions are present in very large numbers, but almost all of them have very similar values of the objective function. Hence, it does not much matter which of these saddle points the algorithm gets stuck at.
  Interest in deep feedforward networks was revived around 2006 by a group of researchers brought together by the Canadian Institute for Advanced Research (CIFAR). The researchers introduced unsupervised learning procedures that could create layers of feature detectors without requiring labelled data. The objective in learning each layer of feature detectors was to be able to reconstruct or model the activities of feature detectors (or raw inputs) in the layer below. By ‘pre-training’ several layers of progressively more complex feature detectors using this reconstruction objective, the weights of a deep network could be initialized to sensible values. A final layer of output units could then be added to the top of the network and the whole deep system could be fine-tuned using standard backpropagation. This worked remarkably well for recognizing handwritten digits or for detecting pedestrians, especially when the amount of labelled data was very limited.
  The first major application of this pre-training approach was in speech recognition, and it was made possible by the advent of fast graphics processing units (GPUs) that were convenient to program and allowed researchers to train networks 10 or 20 times faster. In 2009, the approach was used to map short temporal windows of coefficients extracted from a sound wave to a set of probabilities for the various fragments of speech that might be represented by the frame in the centre of the window. It achieved record-breaking results on a standard speech recognition benchmark that used a small vocabulary and was quickly developed to give record-breaking results on a large vocabulary task. By 2012, versions of the deep net from 2009 were being developed by many of the major speech groups and were already being deployed in Android phones. For smaller data sets, unsupervised pre-training helps to prevent overfitting, leading to significantly better generalization when the number of labelled examples is small, or in a transfer setting where we have lots of examples for some ‘source’ tasks but very few for some ‘target’ tasks. Once deep learning had been rehabilitated, it turned out that the pre-training stage was only needed for small data sets.
  There was, however, one particular type of deep, feedforward network that was much easier to train and generalized much better than networks with full connectivity between adjacent layers. This was the convolutional neural network (ConvNet). It achieved many practical successes during the period when neural networks were out of favour and it has recently been widely adopted by the computer-vision community.

Convolutional neural networks

  ConvNets are designed to process data that come in the form of multiple arrays, for example a colour image composed of three 2D arrays containing pixel intensities in the three colour channels. Many data modalities are in the form of multiple arrays: 1D for signals and sequences, including language; 2D for images or audio spectrograms; and 3D for video or volumetric images. There are four key ideas behind ConvNets that take advantage of the properties of natural signals: local connections, shared weights, pooling and the use of many layers.
  The architecture of a typical ConvNet (Fig. 2) is structured as a series of stages. The first few stages are composed of two types of layers: convolutional layers and pooling layers. Units in a convolutional layer are organized in feature maps, within which each unit is connected to local patches in the feature maps of the previous layer through a set of weights called a filter bank. The result of this local weighted sum is then passed through a non-linearity such as a ReLU. All units in a feature map share the same filter bank. Different feature maps in a layer use different filter banks. The reason for this architecture is twofold. First, in array data such as images, local groups of values are often highly correlated, forming distinctive local motifs that are easily detected. Second, the local statistics of images and other signals are invariant to location. In other words, if a motif can appear in one part of the image, it could appear anywhere, hence the idea of units at different locations sharing the same weights and detecting the same pattern in different parts of the array. Mathematically, the filtering operation performed by a feature map is a discrete convolution, hence the name.
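  The ‘local weighted sum with shared weights’ can be written down directly. Below is a deliberately naive sketch (not the paper's code) of what a single convolutional feature map computes; the image size and the hand-written edge filter are invented, and in practice the filter weights are learned rather than designed:

```python
import numpy as np

def conv2d_valid(image, kernel):
    # Slide one small filter over every location of a 2D array; strictly this is
    # cross-correlation (no kernel flip), the convention used in most deep nets.
    kh, kw = kernel.shape
    ih, iw = image.shape
    out = np.zeros((ih - kh + 1, iw - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            patch = image[i:i + kh, j:j + kw]   # local patch of the layer below
            out[i, j] = np.sum(patch * kernel)  # local weighted sum, same weights everywhere
    return out

image = np.random.default_rng(2).normal(size=(8, 8))   # one channel of an input
vertical_edge_filter = np.array([[1.0, 0.0, -1.0],
                                 [1.0, 0.0, -1.0],
                                 [1.0, 0.0, -1.0]])
feature_map = np.maximum(conv2d_valid(image, vertical_edge_filter), 0.0)  # ReLU
print(feature_map.shape)   # (6, 6): one detection score per image position
```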

Figure 2. The architecture of a convolutional network.   The outputs of each layer (horizontally) of a typical convolutional network architecture applied to the image of a Samoyed dog (bottom left; RGB (red, green, blue) inputs, bottom right). Each rectangular image is a feature map corresponding to the output for one of the learned features, detected at each of the image positions. Information flows bottom up, with lower-level features acting as oriented edge detectors, and a score is computed for each image class in the output. ReLU, rectified linear unit.

  Although the role of the convolutional layer is to detect local conjunctions of features from the previous layer, the role of the pooling layer is to merge semantically similar features into one. Because the relative positions of the features forming a motif can vary somewhat, reliably detecting the motif can be done by coarse-graining the position of each feature. A typical pooling unit computes the maximum of a local patch of units in one feature map (or in a few feature maps). Neighbouring pooling units take input from patches that are shifted by more than one row or column, thereby reducing the dimension of the representation and creating an invariance to small shifts and distortions. Two or three stages of convolution, non-linearity and pooling are stacked, followed by more convolutional and fully-connected layers. Backpropagating gradients through a ConvNet is as simple as through a regular deep network, allowing all the weights in all the filter banks to be trained.
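  A max-pooling layer is equally small. This sketch (sizes and stride chosen arbitrarily) shows each pooling unit reporting the maximum of a local patch, with neighbouring units shifted by the stride, so the representation shrinks and becomes tolerant to small shifts:

```python
import numpy as np

def max_pool(feature_map, size=2, stride=2):
    h, w = feature_map.shape
    out_h = (h - size) // stride + 1
    out_w = (w - size) // stride + 1
    out = np.zeros((out_h, out_w))
    for i in range(out_h):
        for j in range(out_w):
            patch = feature_map[i * stride:i * stride + size,
                                j * stride:j * stride + size]
            out[i, j] = patch.max()          # keep only the strongest response
    return out

fm = np.arange(36, dtype=float).reshape(6, 6)   # a pretend feature map
print(max_pool(fm))                             # 3 x 3 output, one maximum per patch
```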
  Deep neural networks exploit the property that many natural signals are compositional hierarchies, in which higher-level features are obtained by composing lower-level ones. In images, local combinations of edges form motifs, motifs assemble into parts, and parts form objects. Similar hierarchies exist in speech and text from sounds to phones, phonemes, syllables, words and sentences. The pooling allows representations to vary very little when elements in the previous layer vary in position and appearance.
  The convolutional and pooling layers in ConvNets are directly inspired by the classic notions of simple cells and complex cells in visual neuroscience, and the overall architecture is reminiscent of the LGN–V1–V2–V4–IT hierarchy in the visual cortex ventral pathway. When ConvNet models and monkeys are shown the same picture, the activations of high-level units in the ConvNet explain half of the variance of random sets of 160 neurons in the monkey’s inferotemporal cortex. ConvNets have their roots in the neocognitron, the architecture of which was somewhat similar, but did not have an end-to-end supervised-learning algorithm such as backpropagation. A primitive 1D ConvNet called a time-delay neural net was used for the recognition of phonemes and simple words.
  There have been numerous applications of convolutional networks going back to the early 1990s, starting with time-delay neural networks for speech recognition and document reading. The document reading system used a ConvNet trained jointly with a probabilistic model that implemented language constraints. By the late 1990s this system was reading over 10% of all the cheques in the United States. A number of ConvNet-based optical character recognition and handwriting recognition systems were later deployed by Microsoft. ConvNets were also experimented with in the early 1990s for object detection in natural images, including faces and hands, and for face recognition.

Image understanding with deep convolutional networks

  Since the early 2000s, ConvNets have been applied with great success to the detection, segmentation and recognition of objects and regions in images. These were all tasks in which labelled data was relatively abundant, such as traffic sign recognition, the segmentation of biological images particularly for connectomics, and the detection of faces, text, pedestrians and human bodies in natural images. A major recent practical success of ConvNets is face recognition.
  Importantly, images can be labelled at the pixel level, which will have applications in technology, including autonomous mobile robots and self-driving cars. Companies such as Mobileye and NVIDIA are using such ConvNet-based methods in their upcoming vision systems for cars. Other applications gaining importance involve natural language understanding and speech recognition.
  Despite these successes, ConvNets were largely forsaken by the mainstream computer-vision and machine-learning communities until the ImageNet competition in 2012. When deep convolutional networks were applied to a data set of about a million images from the web that contained 1,000 different classes, they achieved spectacular results, almost halving the error rates of the best competing approaches. This success came from the efficient use of GPUs, ReLUs, a new regularization technique called dropout, and techniques to generate more training examples by deforming the existing ones. This success has brought about a revolution in computer vision; ConvNets are now the dominant approach for almost all recognition and detection tasks and approach human performance on some tasks. A recent stunning demonstration combines ConvNets and recurrent net modules for the generation of image captions (Fig. 3).

Figure 3. From image to text.   Captions generated by a recurrent neural network (RNN) taking, as extra input, the representation extracted from a test image by a deep convolutional neural network (CNN), with the RNN trained to ‘translate’ high-level representations of images into captions (top). When the RNN is given the ability to focus its attention on a different location in the input image (middle and bottom; the lighter patches were given more attention) as it generates each word (bold), we found that it exploits this to achieve better ‘translation’ of images into captions.

  Recent ConvNet architectures have 10 to 20 layers of ReLUs, hundreds of millions of weights, and billions of connections between units. Whereas training such large networks could have taken weeks only two years ago, progress in hardware, software and algorithm parallelization have reduced training times to a few hours.
  The performance of ConvNet-based vision systems has caused most major technology companies, including Google, Facebook, Microsoft, IBM, Yahoo!, Twitter and Adobe, as well as a quickly growing number of start-ups to initiate research and development projects and to deploy ConvNet-based image understanding products and services.
  ConvNets are easily amenable to efficient hardware implementations in chips or field-programmable gate arrays. A number of companies such as NVIDIA, Mobileye, Intel, Qualcomm and Samsung are developing ConvNet chips to enable real-time vision applications in smartphones, cameras, robots and self-driving cars.

Distributed representations and language processing

  Deep-learning theory shows that deep nets have two different exponential advantages over classic learning algorithms that do not use distributed representations. Both of these advantages arise from the power of composition and depend on the underlying data-generating distribution having an appropriate componential structure. First, learning distributed representations enables generalization to new combinations of the values of learned features beyond those seen during training (for example, 2^n combinations are possible with n binary features). Second, composing layers of representation in a deep net brings the potential for another exponential advantage (exponential in the depth).
  The hidden layers of a multilayer neural network learn to represent the network’s inputs in a way that makes it easy to predict the target outputs. This is nicely demonstrated by training a multilayer neural network to predict the next word in a sequence from a local context of earlier words. Each word in the context is presented to the network as a one-of-N vector, that is, one component has a value of 1 and the rest are 0. In the first layer, each word creates a different pattern of activations, or word vectors (Fig. 4). In a language model, the other layers of the network learn to convert the input word vectors into an output word vector for the predicted next word, which can be used to predict the probability for any word in the vocabulary to appear as the next word. The network learns word vectors that contain many active components each of which can be interpreted as a separate feature of the word, as was first demonstrated in the context of learning distributed representations for symbols. These semantic features were not explicitly present in the input. They were discovered by the learning procedure as a good way of factorizing the structured relationships between the input and output symbols into multiple ‘micro-rules’. Learning word vectors turned out to also work very well when the word sequences come from a large corpus of real text and the individual micro-rules are unreliable. When trained to predict the next word in a news story, for example, the learned word vectors for Tuesday and Wednesday are very similar, as are the word vectors for Sweden and Norway. Such representations are called distributed representations because their elements (the features) are not mutually exclusive and their many configurations correspond to the variations seen in the observed data. These word vectors are composed of learned features that were not determined ahead of time by experts, but automatically discovered by the neural network. Vector representations of words learned from text are now very widely used in natural language applications.
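  A minimal sketch of these first layers may help (none of this is the paper's code; the tiny vocabulary, the embedding size and the random weights are made up, and no training is shown). Each context word enters as a one-of-N vector, the first layer turns it into a learned word vector, and a softmax layer assigns a probability to every vocabulary word as the next word:

```python
import numpy as np

rng = np.random.default_rng(3)
vocab = ["the", "cat", "sat", "on", "mat", "tuesday", "wednesday"]
V, d_embed, context = len(vocab), 8, 2

E = rng.normal(scale=0.1, size=(V, d_embed))          # table of word vectors (first layer)
W_out = rng.normal(scale=0.1, size=(V, context * d_embed))

def one_hot(index, size):
    v = np.zeros(size)
    v[index] = 1.0
    return v

def next_word_probabilities(context_words):
    # A one-of-N vector times the table is just a row lookup: the word's vector.
    vectors = [one_hot(vocab.index(w), V) @ E for w in context_words]
    hidden = np.concatenate(vectors)
    scores = W_out @ hidden
    exp = np.exp(scores - scores.max())               # softmax over the vocabulary
    return exp / exp.sum()

p = next_word_probabilities(["the", "cat"])
print(dict(zip(vocab, np.round(p, 3))))               # probability of each candidate next word
```

Training would adjust both E and W_out by backpropagation, which is how related words such as Tuesday and Wednesday end up with similar vectors.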
  The issue of representation lies at the heart of the debate between the logic-inspired and the neural-network-inspired paradigms for cognition. In the logic-inspired paradigm, an instance of a symbol is something for which the only property is that it is either identical or non-identical to other symbol instances. It has no internal structure that is relevant to its use; and to reason with symbols, they must be bound to the variables in judiciously chosen rules of inference. By contrast, neural networks just use big activity vectors, big weight matrices and scalar non-linearities to perform the type of fast ‘intuitive’ inference that underpins effortless commonsense reasoning.
  Before the introduction of neural language models, the standard approach to statistical modelling of language did not exploit distributed representations: it was based on counting frequencies of occurrences of short symbol sequences of length up to N (called N-grams). The number of possible N-grams is on the order of V^N, where V is the vocabulary size, so taking into account a context of more than a handful of words would require very large training corpora. N-grams treat each word as an atomic unit, so they cannot generalize across semantically related sequences of words, whereas neural language models can because they associate each word with a vector of real-valued features, and semantically related words end up close to each other in that vector space (Fig. 4).
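  For contrast, the count-based N-gram approach fits in a few lines (the toy corpus is made up). The combinatorial problem is visible immediately: with vocabulary size V there are on the order of V^N possible N-grams, and any N-gram never seen in the corpus simply gets probability zero:

```python
from collections import Counter

corpus = "the cat sat on the mat the cat ate the fish".split()
N = 2  # bigrams

ngrams = Counter(tuple(corpus[i:i + N]) for i in range(len(corpus) - N + 1))
contexts = Counter(tuple(corpus[i:i + N - 1]) for i in range(len(corpus) - N + 1))

def ngram_probability(context, word):
    # P(word | context) = count(context followed by word) / count(context)
    return ngrams[tuple(context) + (word,)] / max(contexts[tuple(context)], 1)

print(ngram_probability(["the"], "cat"))   # 2 of the 4 occurrences of "the" are followed by "cat"
```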

Figure 4. Visualizing the learned word vectors.   On the left is an illustration of word representations learned for modelling language, non-linearly projected to 2D for visualization using the t-SNE algorithm. On the right is a 2D representation of phrases learned by an English-to-French encoder-decoder recurrent neural network. One can observe that semantically similar words or sequences of words are mapped to nearby representations. The distributed representations of words are obtained by using backpropagation to jointly learn a representation for each word and a function that predicts a target quantity such as the next word in a sequence (for language modelling) or a whole sequence of translated words (for machine translation).

Recurrent neural networks

  When backpropagation was first introduced, its most exciting use was for training recurrent neural networks (RNNs). For tasks that involve sequential inputs, such as speech and language, it is often better to use RNNs (Fig. 5). RNNs process an input sequence one element at a time, maintaining in their hidden units a ‘state vector’ that implicitly contains information about the history of all the past elements of the sequence. When we consider the outputs of the hidden units at different discrete time steps as if they were the outputs of different neurons in a deep multilayer network (Fig. 5, right), it becomes clear how we can apply backpropagation to train RNNs.

Figure 5. A recurrent neural network and the unfolding in time of the computation involved in its forward computation.   The artificial neurons (for example, hidden units grouped under node s with values s_t at time t) get inputs from other neurons at previous time steps (this is represented by the black square on the left, which denotes a delay of one time step). In this way, a recurrent neural network can map an input sequence with elements x_t into an output sequence with elements o_t, with each o_t depending on all the previous x_t′ (for t′ ≤ t). The same parameters (matrices U, V, W) are used at each time step. Many other architectures are possible, including a variant in which the network can generate a sequence of outputs (for example, words), each of which is used as an input for the next time step. The backpropagation procedure (Fig. 1) can be applied directly to the computational graph of the unfolded network on the right, to compute the derivative of a total error (for example, the log-probability of generating the correct sequence of outputs) with respect to all the states s_t and all the parameters.
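  A minimal forward-pass sketch of this unrolled computation (the sizes, the random parameters and the tanh non-linearity are illustrative assumptions, and no training is shown):

```python
import numpy as np

rng = np.random.default_rng(4)
d_in, d_state, d_out, T = 3, 5, 2, 4
U = rng.normal(scale=0.5, size=(d_state, d_in))     # input -> state
W = rng.normal(scale=0.5, size=(d_state, d_state))  # previous state -> state
V = rng.normal(scale=0.5, size=(d_out, d_state))    # state -> output

xs = rng.normal(size=(T, d_in))                     # an input sequence x_1 ... x_T
s = np.zeros(d_state)                               # initial state vector
outputs = []
for t in range(T):
    s = np.tanh(U @ xs[t] + W @ s)                  # s_t depends on x_t and on s_{t-1}
    outputs.append(V @ s)                           # o_t depends on all x_t' with t' <= t
print(np.array(outputs).shape)                      # (4, 2): one output per time step
```

The same U, V and W are reused at every step, which is what makes the unrolled network equivalent to a very deep feedforward network with tied weights.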

  RNNs are very powerful dynamic systems, but training them has proved to be problematic because the backpropagated gradients either grow or shrink at each time step, so over many time steps they typically explode or vanish.
  Thanks to advances in their architecture and ways of training them, RNNs have been found to be very good at predicting the next character in the text or the next word in a sequence, but they can also be used for more complex tasks. For example, after reading an English sentence one word at a time, an English ‘encoder’ network can be trained so that the final state vector of its hidden units is a good representation of the thought expressed by the sentence. This thought vector can then be used as the initial hidden state of (or as extra input to) a jointly trained French ‘decoder’ network, which outputs a probability distribution for the first word of the French translation. If a particular first word is chosen from this distribution and provided as input to the decoder network it will then output a probability distribution for the second word of the translation and so on until a full stop is chosen. Overall, this process generates sequences of French words according to a probability distribution that depends on the English sentence. This rather naive way of performing machine translation has quickly become competitive with the state-of-the-art, and this raises serious doubts about whether understanding a sentence requires anything like the internal symbolic expressions that are manipulated by using inference rules. It is more compatible with the view that everyday reasoning involves many simultaneous analogies that each contribute plausibility to a conclusion.
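  The control flow of this encoder-decoder scheme is easy to sketch, even though a toy version translates nothing useful. In the sketch below (entirely illustrative: the vocabularies, sizes and random, untrained weights are made up, and the full stop doubles as the start symbol), the encoder's final state becomes the thought vector and the decoder greedily emits one word at a time until it produces a full stop:

```python
import numpy as np

rng = np.random.default_rng(5)
src_vocab = ["the", "cat", "sleeps"]
tgt_vocab = ["le", "chat", "dort", "."]
d = 6
U_e = rng.normal(scale=0.3, size=(d, len(src_vocab)))
W_e = rng.normal(scale=0.3, size=(d, d))
U_d = rng.normal(scale=0.3, size=(d, len(tgt_vocab)))
W_d = rng.normal(scale=0.3, size=(d, d))
V_d = rng.normal(scale=0.3, size=(len(tgt_vocab), d))

def one_hot(i, n):
    v = np.zeros(n)
    v[i] = 1.0
    return v

# Encoder: read the English sentence one word at a time; the final state
# vector is the 'thought vector' summarizing the sentence.
s = np.zeros(d)
for w in ["the", "cat", "sleeps"]:
    s = np.tanh(U_e @ one_hot(src_vocab.index(w), len(src_vocab)) + W_e @ s)

# Decoder: pick the most probable word, feed it back in, stop at the full stop.
word, translation = ".", []
for _ in range(10):
    s = np.tanh(U_d @ one_hot(tgt_vocab.index(word), len(tgt_vocab)) + W_d @ s)
    word = tgt_vocab[int(np.argmax(V_d @ s))]   # greedy choice from the output distribution
    translation.append(word)
    if word == ".":
        break
print(translation)
```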
  Instead of translating the meaning of a French sentence into an English sentence, one can learn to ‘translate’ the meaning of an image into an English sentence (Fig. 3). The encoder here is a deep ConvNet that converts the pixels into an activity vector in its last hidden layer. The decoder is an RNN similar to the ones used for machine translation and neural language modelling. There has been a surge of interest in such systems recently.
  RNNs, once unfolded in time (Fig. 5), can be seen as very deep feedforward networks in which all the layers share the same weights. Although their main purpose is to learn long-term dependencies, theoretical and empirical evidence shows that it is difficult to learn to store information for very long.
  To correct for that, one idea is to augment the network with an explicit memory. The first proposal of this kind is the long short-term memory (LSTM) networks that use special hidden units, the natural behaviour of which is to remember inputs for a long time. A special unit called the memory cell acts like an accumulator or a gated leaky neuron: it has a connection to itself at the next time step that has a weight of one, so it copies its own real-valued state and accumulates the external signal, but this self-connection is multiplicatively gated by another unit that learns to decide when to clear the content of the memory.
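  Stripped of everything else, the memory-cell idea reads as follows (a sketch only: real LSTM gates are driven by the current input and state through learned weight matrices, whereas here two scalar weights are invented purely to show the gating arithmetic):

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

rng = np.random.default_rng(6)
w_input_gate, w_forget_gate = rng.normal(size=2)

cell = 0.0                                      # the accumulator ('memory cell')
for x in [1.0, 0.5, -0.3, 2.0, 0.1]:            # an arbitrary input signal
    input_gate = sigmoid(w_input_gate * x)      # how much of the new input to let in
    forget_gate = sigmoid(w_forget_gate * x)    # a learned gate decides when to clear the cell
    cell = forget_gate * cell + input_gate * x  # self-connection plus gated external signal
    print(round(cell, 3))
```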
  LSTM networks have subsequently proved to be more effective than conventional RNNs, especially when they have several layers for each time step, enabling an entire speech recognition system that goes all the way from acoustics to the sequence of characters in the transcription. LSTM networks or related forms of gated units are also currently used for the encoder and decoder networks that perform so well at machine translation.
  Over the past year, several authors have made different proposals to augment RNNs with a memory module. Proposals include the Neural Turing Machine in which the network is augmented by a ‘tape-like’ memory that the RNN can choose to read from or write to, and memory networks, in which a regular network is augmented by a kind of associative memory. Memory networks have yielded excellent performance on standard question-answering benchmarks. The memory is used to remember the story about which the network is later asked to answer questions.
  Beyond simple memorization, neural Turing machines and memory networks are being used for tasks that would normally require reasoning and symbol manipulation. Neural Turing machines can be taught ‘algorithms’. Among other things, they can learn to output a sorted list of symbols when their input consists of an unsorted sequence in which each symbol is accompanied by a real value that indicates its priority in the list. Memory networks can be trained to keep track of the state of the world in a setting similar to a text adventure game and after reading a story, they can answer questions that require complex inference. In one test example, the network is shown a 15-sentence version of The Lord of the Rings and correctly answers questions such as “where is Frodo now?”.

The future of deep learning

  Unsupervised learning had a catalytic effect in reviving interest in deep learning, but has since been overshadowed by the successes of purely supervised learning. Although we have not focused on it in this Review, we expect unsupervised learning to become far more important in the longer term. Human and animal learning is largely unsupervised: we discover the structure of the world by observing it, not by being told the name of every object.
  Human vision is an active process that sequentially samples the optic array in an intelligent, task-specific way using a small, high-resolution fovea with a large, low-resolution surround. We expect much of the future progress in vision to come from systems that are trained end-to-end and combine ConvNets with RNNs that use reinforcement learning to decide where to look. Systems combining deep learning and reinforcement learning are in their infancy, but they already outperform passive vision systems at classification tasks and produce impressive results in learning to play many different video games.
  Natural language understanding is another area in which deep learning is poised to make a large impact over the next few years. We expect systems that use RNNs to understand sentences or whole documents will become much better when they learn strategies for selectively attending to one part at a time.
  Ultimately, major progress in artificial intelligence will come about through systems that combine representation learning with complex reasoning. Although deep learning and simple reasoning have been used for speech and handwriting recognition for a long time, new paradigms are needed to replace rule-based manipulation of symbolic expressions by operations on large vectors.
