Breaking Down the Innovative Deep Learning Behind Google Translate

What Google Translate does is nothing short of amazing. To engineer the ability to translate between any pair of the dozens of languages it supports, Google Translate’s creators utilized some of the most advanced and recent developments in NLP in exceptionally creative ways.

In machine translation, there are generally two approaches: a rule-based approach and a machine learning-based approach. Rule-based translation involves collecting a massive dictionary of translations, perhaps word-by-word or by phrase, which are pieced together into a translation. This approach, however, breaks down quickly.

For one, grammar structures differ significantly between languages. Consider Spanish, in which nouns have a masculine or feminine gender. All adjectives and words like ‘the’ or ‘a’ must agree with the gender and number of the noun they describe. Translating ‘the big red apples’ into Spanish would require each of the words ‘the’, ‘big’, and ‘red’ to be written in plural and feminine form, since those are the attributes of the word ‘apples’. In addition, Spanish adjectives usually follow the noun, though some precede it.

Image created by author

The result is ‘las [the] grandes [big] manzanas [apples] rojas [red]’. This grammar, and the necessity of changing every adjective, makes little intuitive sense to a monolingual English speaker. Even within English-to-Spanish translation alone, there are too many disparities in fundamental structure to keep track of; a truly global service, however, requires translation between every pair of languages.

Within this task arises another problem: to translate between, say, French and Mandarin, the only feasible rule-based solution would be to translate French into a base language (probably English), which would then be translated into Mandarin. This is like playing telephone: the nuance of a phrase said in one language is trampled over by noise and heavy-handed generalization.

Image created by author

The complete hopelessness of rule- or dictionary-based translation, and the need for some kind of universal model that can learn the vocabulary and structure of two languages, should now be clear. Building this model is a difficult task for a few reasons, however:

  • The model needs to be lightweight enough to work offline, so users can access it even without an Internet connection. Moreover, translation between any two languages should be supported, all downloaded on the user’s phone (or PC).

  • The model must be fast enough to generate live translations.
  • Elaborating on the example above: in English, the words ‘big red apples’ are read sequentially. If the model processed the data strictly left-to-right, however, the Spanish translation would be inaccurate, since Spanish adjectives, which precede the noun in English, change form depending on the noun they modify. The model needs to support non-sequential translation.
  • Machine learning-based systems are always heavily reliant on the dataset, which means that words not represented in the data are words the model knows nothing about (it needs robustness and a good memory for rare words). Where would one find a collection of high-quality translated data representative of the entire grammar and vocabulary of a language?
  • A lightweight model cannot memorize the vocabulary of an entire language. How does the model deal with unknown words?
  • Many Asian languages, like Japanese or Mandarin, are based on characters instead of letters, so a single character can stand for an entire word. A machine learning model must be able to translate from a letter-based system like English, Spanish, or German (which, even containing accented letters, are nevertheless letter-based) to a character-based one like Mandarin, and vice versa.

When Google Translate was initially released, it used a phrase-based algorithm, which is essentially a rule-based method with more complexity. Soon after, however, translation quality improved drastically with the development of Google Neural Machine Translation (GNMT).

Source: Google Translate. Image free to share.

They considered each of the problems above and came up with innovative solutions, creating an improved Google Translate, now the world’s most popular free translation service.

Creating one model for every pair of languages is obviously impractical: the number of deep models needed would reach the hundreds, each of which would need to be stored on a user’s phone or PC for efficient usage and/or offline use. Instead, Google decided to create one large neural network that could translate between any two languages, given tokens (indicators in the input) representing those languages.
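At the data level, the idea can be sketched very simply: the shared model is told which language to produce by an artificial token added to the input. Below is a minimal illustration; the ‘<2xx>’ token format follows Google’s description of multilingual NMT, while the function name is purely illustrative.

```python
# Sketch of multilingual input conditioning: one shared model,
# steered by an artificial token prepended to the source text.

def add_target_token(source_sentence: str, target_lang: str) -> str:
    """Prepend a target-language token, e.g. '<2es>' for Spanish."""
    return f"<2{target_lang}> {source_sentence}"

print(add_target_token("the big red apples", "es"))
# -> '<2es> the big red apples'
```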

The fundamental structure of the model is the encoder-decoder. One segment of the neural network seeks to reduce one language into a fundamental, machine-readable ‘universal representation’, whereas the other takes this universal representation and repeatedly transforms the underlying ideas into the output language. This is a sequence-to-sequence (seq2seq) architecture, not to be confused with the later Transformer; the following graphic gives a good intuition of how it works, how previously generated content plays a role in generating subsequent outputs, and its sequential nature.
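As a toy illustration of the encoder-decoder idea (not GNMT itself, which stacks eight layers per side and adds attention), here is a minimal seq2seq sketch in PyTorch; all names and dimensions are illustrative.

```python
import torch
import torch.nn as nn

class ToySeq2Seq(nn.Module):
    """The encoder compresses the source into a hidden state (the 'universal
    representation'); the decoder unrolls it in the target language."""
    def __init__(self, src_vocab: int, tgt_vocab: int, dim: int = 256):
        super().__init__()
        self.src_emb = nn.Embedding(src_vocab, dim)
        self.tgt_emb = nn.Embedding(tgt_vocab, dim)
        self.encoder = nn.LSTM(dim, dim, batch_first=True)
        self.decoder = nn.LSTM(dim, dim, batch_first=True)
        self.out = nn.Linear(dim, tgt_vocab)

    def forward(self, src_ids, tgt_ids):
        _, state = self.encoder(self.src_emb(src_ids))        # encode the source
        dec, _ = self.decoder(self.tgt_emb(tgt_ids), state)   # condition decoder on it
        return self.out(dec)                                  # next-token logits
```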

AnalyticsIndiaMag. Image free to share.

Consider an alternative visualization of this encoder-decoder relationship (a seq2seq model). The attention mechanism sitting between the encoder and decoder will be discussed later.

Google AI. Image free to share.

The encoder consists of eight stacked LSTM layers. In a nutshell, the LSTM is an improvement upon the RNN (a neural network designed for sequential data) that allows the network to ‘remember’ useful information and make better future predictions. To address the non-sequential nature of language, the first two layers add bidirectionality: pink nodes indicate a left-to-right reading, whereas green nodes indicate a right-to-left reading. This allows GNMT to accommodate different grammar structures.
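In a modern framework, bidirectionality is a one-flag change; the sketch below (with illustrative sizes) shows how each position ends up encoded by both a forward and a backward pass.

```python
import torch
import torch.nn as nn

# A bidirectional LSTM reads the sequence left-to-right and right-to-left
# and concatenates the two views, so each word's encoding can also depend
# on the words that come after it.
bi_lstm = nn.LSTM(input_size=256, hidden_size=256,
                  batch_first=True, bidirectional=True)

x = torch.randn(1, 5, 256)   # (batch, sequence length, features)
out, _ = bi_lstm(x)
print(out.shape)             # torch.Size([1, 5, 512]): both directions concatenated
```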

Source: GNMT Paper. Image free to share.

The decoder model is also composed of eight LSTM layers. These seek to translate the encoded content into the new language.


An ‘attention mechanism’ is placed between the two models. In humans, attention helps us keep focus on a task by looking for answers to that task rather than additional, irrelevant information. In the GNMT model, the attention mechanism helps identify and amplify the importance of difficult or unknown segments of the message, which are prioritized during decoding. This solves a large part of the ‘rare words problem’: words that appear less often in the dataset are compensated with more attention.
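The mechanics can be sketched in a few lines. The version below uses simple dot-product scoring for brevity; GNMT’s own attention scores alignments with a small feed-forward network, but the overall flow (score, normalize, weight, summarize) is the same.

```python
import torch
import torch.nn.functional as F

def attend(decoder_state, encoder_outputs):
    """decoder_state: (batch, dim); encoder_outputs: (batch, src_len, dim).
    Returns a context vector: a relevance-weighted summary of the source."""
    scores = torch.bmm(encoder_outputs, decoder_state.unsqueeze(2))  # (batch, src_len, 1)
    weights = F.softmax(scores, dim=1)                 # attention over source positions
    context = (weights * encoder_outputs).sum(dim=1)   # (batch, dim)
    return context, weights.squeeze(2)
```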

Skip connections, or connections that jump over certain layers, were used to promote healthy gradient flow. As with the ResNet (Residual Network) model, gradient updates may get caught up at one particular layer, affecting all the layers before it. With such a deep network, comprising 16 LSTMs in total, skip connections are imperative not only for training time but for performance, since they allow gradients to bypass potentially problematic layers.
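A single residual LSTM layer can be sketched as follows (illustrative dimensions; the point is simply that the input is added back to the output, giving the gradient a direct path around the layer).

```python
import torch.nn as nn

class ResidualLSTM(nn.Module):
    """One LSTM layer whose input is added back to its output, so
    backpropagated gradients have a path that bypasses the LSTM."""
    def __init__(self, dim: int = 256):
        super().__init__()
        self.lstm = nn.LSTM(dim, dim, batch_first=True)

    def forward(self, x):
        out, _ = self.lstm(x)
        return x + out   # the skip (residual) connection
```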

Source: GNMT Paper. Image free to share.

The builders of GNMT invested lots of effort into developing an efficient low-level system that ran on TPU (Tensor Processing Unit), a specialized machine-learning hardware processor designed by Google, for optimal training.


An interesting benefit of using one model to learn all the translations was that translations could be learned indirectly. For instance, when GNMT was trained only on English-to-Korean, Korean-to-English, Japanese-to-English, and English-to-Japanese data, the model still yielded good Japanese-to-Korean and Korean-to-Japanese translations, even though it had never been directly trained on those pairs. This is known as zero-shot learning, and it significantly reduced the training required for deployment.

AnalyticsIndiaMag. Image free to share.

Heavy pre-processing and post-processing is done on the inputs and outputs of the GNMT model in order to support, for example, the highly specialized characters often found in Asian languages. Inputs are tokenized according to a custom-designed system, with word segmentation and markers for the beginning, middle, and end of a word. These additions made the bridge between different fundamental representations of language more fluid.
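This is the ‘wordpiece’ idea: rare or unseen words are split into known subword units, so the model never faces a truly unknown token. The greedy longest-match sketch below uses a hypothetical vocabulary and the common ‘##’ continuation marker; GNMT’s own marker conventions differ.

```python
def wordpiece(word, vocab):
    """Greedy longest-match-first subword segmentation (illustrative only)."""
    pieces, start = [], 0
    while start < len(word):
        end = len(word)
        while end > start:
            piece = word[start:end] if start == 0 else "##" + word[start:end]
            if piece in vocab:          # take the longest known piece
                pieces.append(piece)
                break
            end -= 1
        if end == start:                # nothing in the vocabulary matched
            return ["<unk>"]
        start = end
    return pieces

print(wordpiece("translation", {"trans", "##lat", "##ion"}))
# -> ['trans', '##lat', '##ion']
```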

For training data, Google used documents and transcripts from the United Nations and the European Parliament. Since these organizations produce information professionally translated between many languages, at high quality (imagine the dangers of a badly translated declaration), this data was a good starting point. Later on, Google began using user (‘community’) input to strengthen culture-specific, slang, and informal language in its model.

GNMT was evaluated on a variety of metrics. During training, GNMT used log perplexity. Perplexity is a form of entropy, particularly ‘Shannon entropy’, so it may be easier to start from there. Entropy is the average number of bits needed to encode the information contained in a variable, and perplexity measures how well a probability model can predict a sample. One intuitive example of perplexity: the number of characters a user must type into a search box before a query-suggestion system is at least 70% sure which query the user will type. It is a natural choice for evaluating NLP tasks and models.
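Concretely, perplexity is the exponentiated average negative log-probability the model assigns to the true tokens; a minimal sketch with made-up probabilities:

```python
import math

def perplexity(token_probs):
    """token_probs: the probabilities the model assigned to each correct token.
    A perplexity of k means the model is, on average, as uncertain as a
    uniform choice among k options; lower is better."""
    nll = -sum(math.log(p) for p in token_probs) / len(token_probs)
    return math.exp(nll)

print(perplexity([0.5, 0.25, 0.5, 0.125]))   # ≈ 3.36
```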

The standard BLEU score for language translation attempts to measure how close the translation was to a human one, on a scale from 0 to 1, using a string-matching algorithm. It is still widely used because it has shown strong correlation with human-rated performance: correct words are rewarded, with bonuses for consecutive correct words and longer/more complex words.

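For a quick feel of the metric, NLTK ships a reference implementation (this snippet assumes nltk is installed; bigram weights are used so the short toy example has non-zero n-gram matches):

```python
from nltk.translate.bleu_score import sentence_bleu

reference = [["the", "big", "red", "apples"]]   # human translation(s)
candidate = ["the", "large", "red", "apples"]   # model output
# Unigram precision 3/4, bigram precision 1/3 -> BLEU-2 = sqrt(1/4) = 0.5
print(sentence_bleu(reference, candidate, weights=(0.5, 0.5)))
```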

However, it assumes that a professional human translation is the ideal translation, only evaluates a model on select sentences, and does not have much robustness to different phrasing or synonyms. This is why a high BLEU score (>0.7) is usually a sign of overfitting.


Regardless, an increase in BLEU score (represented as a fraction) has been shown to track an increase in language-modelling power, as demonstrated below:

Google AI. Image free to share.

Using the developments of GNMT, Google launched an extension that performs visual real-time translation of foreign text. One network identifies potential letters, which are fed into a convolutional neural network for recognition. The recognized words are then fed into GNMT for translation and rendered in the same font and style as the original.

Source: Google Translate. Image free to share.

One can only imagine the difficulties that abound in creating such a service: identifying individual letters, piecing together words, determining the size and font of the text, and properly rendering the translated image.

GNMT appears in many other applications, sometimes with a different architecture. Fundamentally, however, GNMT represents a milestone in NLP: a lightweight yet effective design, built upon years of NLP breakthroughs, that makes them incredibly accessible to everyone.

Key Points

  • There are many challenges in providing a truly global translation service. The model must be lightweight, but must also understand the vocabulary, grammar structures, and relationships of dozens of languages.
  • Rule-based translation systems, even more complex phrase-based ones, fail to perform well at translation tasks.
  • GNMT uses an encoder-decoder (seq2seq) architecture with attention, in which the encoder and decoder are each composed of 8 stacked LSTM layers. The first two layers of the encoder allow bidirectional reading to accommodate non-sequential grammar.
  • The GNMT model uses skip connections to promote healthy gradient flow.
  • GNMT demonstrated zero-shot learning, in which language pairs never seen directly in training can still be translated, significantly reducing training requirements.
  • The model was trained on log perplexity and evaluated formally using the standard BLEU score.

With the advancements of GNMT, which reach beyond text-to-text translation into image-to-image and sound-to-sound translation, deep learning has made one huge leap towards the understanding of human language. Its applications, not as an esoteric and impractical model but as an innovative, lightweight, and highly usable one, are unbounded. In many ways, GNMT is one of the most accessible and practical culminations of years of cutting-edge NLP research.

This was just a peek into the fascinating machine learning behind Google Translate. You can read the full-length paper here and visit the interface for yourself here.


Translated from: https://medium.com/analytics-vidhya/breaking-down-the-innovative-deep-learning-behind-google-translate-355889e104f1
