From time to time I would read some ML/AI/DL papers just to keep up with what’s going on in the tech industry these days — and I thought it might be a good idea to collate some interesting ones and share them with you guys in an article (coupled with some key technical concepts). So here are a few research projects that I (personally) really like and I hope you will too:

1. Toonify Yourself

Justin Pinkney’s blog — Source: link

Authors: Doron Adler and Justin Pinkney

Starting off with something light-hearted, this fun little project lets you upload your own image and transform yourself into a cartoon. Image processing and manipulation probably isn’t anything very new by this point in time, but it is still very interesting considering that the site even provides you with a Google Colab notebook that lets you toonify yourself and follow along with the steps so you can recreate it for yourself.

Some key technical ideas/concepts:

This project is a network blending/layer swapping project in StyleGAN, and they’ve used pre-trained models to do transfer learning. Specifically, they utilized two models — a base model and a fine-tuned model (created from the base model) — and swapped layers between the two to produce such results. High-resolution layers are taken from the base model and low-resolution layers are taken from the fine-tuned model. Next, a latent vector is produced from the original image that we want to “toonify”, and this latent vector acts as the input to the blended model (the image generated from this latent vector looks very similar to the original). After inputting this latent vector into the blended model, the toonified version is created.
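
To make the layer-swapping idea concrete, here is a minimal sketch of network blending in code. It is not the project’s actual implementation: it assumes both generators share the StyleGAN architecture and are exposed as PyTorch-style state dicts whose parameter names embed each layer’s output resolution (the “b64”-style naming below is a hypothetical convention, and swap_res is an illustrative cutoff).

```python
import re

def blend_state_dicts(base_sd: dict, toon_sd: dict, swap_res: int = 32) -> dict:
    """Blend two StyleGAN generators by swapping layers at a resolution cutoff.

    As described above: low-resolution layers (coarse structure) come from the
    fine-tuned toon model, high-resolution layers (fine texture) from the base.
    """
    blended = {}
    for name, tensor in base_sd.items():
        # Hypothetical naming scheme: parameter names contain "b<resolution>",
        # e.g. "synthesis.b64.conv1.weight".
        m = re.search(r"\bb(\d+)\b", name)
        res = int(m.group(1)) if m else None
        if res is not None and res < swap_res:
            blended[name] = toon_sd[name]   # coarse structure from toon model
        else:
            blended[name] = tensor          # fine detail from base model
    return blended
```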

Read more about it here.

2. Lamphone

link

Authors: Ben Nassi, Yaron Pirutin, Adi Shamir, Yuval Elovici, and Boris Zadov

I find this to be slightly creepy — Lamphone is a novel side-channel attack technique that allows eavesdroppers to recover speech and non-speech information by recovering sound from optical measurements of a light bulb’s vibrations, captured by an electro-optical sensor.

Some key technical ideas/concepts:

The setup involves a telescope with an electro-optical sensor mounted on it that can detect the vibration patterns of a hanging bulb in the victim’s room. Specifically, they exploited the fluctuations in air pressure around the hanging bulb (sound causes the bulb to vibrate). For this particular experiment, using a $400 electro-optical sensor, they were able to successfully recover a song played in the room and a snippet of a Trump speech recording from a distance of 25 meters. The authors note that this distance could be extended with higher-grade equipment.
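
To build intuition for the recovery step, here is a minimal sketch of the general idea (not the authors’ actual pipeline). It assumes the electro-optical sensor yields a one-dimensional voltage trace sampled at audio rate, so that detrending, band-pass filtering, and normalization are enough to make the signal audible; the sample rate and filter band below are illustrative.

```python
import numpy as np
from scipy.io import wavfile
from scipy.signal import butter, sosfilt

def recover_audio(voltage: np.ndarray, fs: int) -> np.ndarray:
    """Turn an optical sensor's voltage trace into a normalized audio signal."""
    voltage = voltage - np.mean(voltage)                  # drop the DC light level
    sos = butter(4, [80, 1800], btype="bandpass", fs=fs, output="sos")
    audio = sosfilt(sos, voltage)                         # keep the speech band
    return audio / (np.max(np.abs(audio)) + 1e-9)         # normalize to [-1, 1]

# Hypothetical usage with a synthetic trace: a tiny vibration ripple on a DC level.
fs = 4000
t = np.arange(2 * fs) / fs
trace = 0.5 + 0.01 * np.sin(2 * np.pi * 440 * t)
wavfile.write("recovered.wav", fs, recover_audio(trace, fs).astype(np.float32))
```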

Read more about it here.

3. GANPaint Studio

MIT-IBM Watson AI Lab — Source: link

Authors: David Bau, Hendrik Strobelt, William Peebles, Jonas Wulff, Bolei Zhou, Jun-Yan Zhu, Antonio Torralba

This was the project that I got really excited about when I was collating this list. It is basically a tool that allows you to “blend” an image object into the highlighted portion of the original image (you get to decide where to highlight as well). I have always been really excited about exploring the idea of allowing people without formal art training to easily create new images/artworks. I find the process of creating new artworks and images to be a really enjoyable creative process, and hopefully with more projects like these, the machine can play an assisting role that will encourage more people to exercise their creative muscles and create art. In this case, even though their tool is not meant to create art, I don’t think it is that far-fetched to imagine a variation of their tool that WILL assist users in creating art instead.

Some key technical ideas/concepts:

The interesting thing about this project is that it synthesizes new content that follows both the user’s intention and natural image statistics. Their image manipulation pipeline consists of a three-step process — computing the latent vector of the original image, applying a semantic vector space operation in the latent space, then finally regenerating the image from the modified latent vector. Here’s what they’ve written in their original paper:

Given a natural photograph as input, we first re-render the image using an image generator. More concretely, to precisely reconstruct the input image, our method not only optimizes the latent representation but also adapts the generator. The user then manipulates the photo using our interactive interface, such as by adding or removing specific objects or changing their appearances. Our method updates the latent representation according to each edit and renders the final result given the modified representation. Our results look both realistic and visually similar to the input natural photograph.
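
To make the three-step pipeline concrete, here is a toy sketch: invert an image into a latent vector by optimization, nudge the latent along a semantic direction, and regenerate. Everything here is a stand-in (TinyGenerator, the random edit direction, the step count); the real system uses a pre-trained GAN, learned semantic operations, and, as the quote notes, also adapts the generator itself.

```python
import torch
import torch.nn as nn

class TinyGenerator(nn.Module):
    """Toy stand-in for a pre-trained GAN generator."""
    def __init__(self, latent_dim: int = 64, image_dim: int = 3 * 32 * 32):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(latent_dim, 256), nn.ReLU(),
                                 nn.Linear(256, image_dim), nn.Tanh())

    def forward(self, z: torch.Tensor) -> torch.Tensor:
        return self.net(z)

def invert(G: nn.Module, target: torch.Tensor, latent_dim: int = 64,
           steps: int = 200, lr: float = 0.05) -> torch.Tensor:
    """Step 1: optimize a latent vector so that G(z) reconstructs the target."""
    z = torch.randn(1, latent_dim, requires_grad=True)
    opt = torch.optim.Adam([z], lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        loss = torch.mean((G(z) - target) ** 2)   # pixel reconstruction loss
        loss.backward()
        opt.step()
    return z.detach()

G = TinyGenerator()
target = torch.rand(1, 3 * 32 * 32) * 2 - 1       # stand-in "photograph"
z = invert(G, target)                             # step 1: latent vector
direction = torch.randn(1, 64)                    # step 2: a semantic direction
edited = G(z + 0.5 * direction)                   # step 3: regenerate the image
```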

Read more about it or try out the demo here.

4. Jukebox

link

Authors: Prafulla Dhariwal, Heewoo Jun, Christine McLeavey Payne (EQUAL CONTRIBUTORS), Jong Wook Kim, Alec Radford, Ilya Sutskever

In a nutshell, this project utilizes neural networks to generate music. Given the genre, artist, and lyrics as input, Jukebox outputs a new music sample produced from scratch.

link

Automated music generation is not exactly new technology — previous approaches include generating music symbolically. However, these generators often cannot capture essential musical elements like human voices, subtle timbres, dynamics, and expressiveness. I spent some time listening to some of the tracks listed on their “Jukebox Sample Explorer” page and personally felt it’s still not at the level where listeners would be unable to differentiate between a real track and an auto-generated one. However, I think it is definitely exciting to see where projects like these will take us and what this means for the music industry in the near future.

Some key technical ideas/concepts:

Their approach is a two-step process — the first step compresses music to discrete codes, and the second step generates codes using transformers. They have some really nice diagrams explaining this process and I’ll include them here:

Figure A (source: link)
Figure B (source: link)

How the compression part works — they utilize a modified version of the Vector Quantised-Variational AutoEncoder (VQ-VAE-2), a generative model for discrete representation learning. With reference to Figure A above, the 44kHz raw audio is compressed 8x, 32x, and 128x, with a codebook size of 2048 for each level. If you go to their site and click on the sound icons to hear how each reconstructed audio sounds, the right-most one will sound the noisiest, since it is compressed 128x and only the most essential features are retained (e.g. pitch, timbre, and volume).
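
The core vector-quantization step can be sketched in a few lines of NumPy. The codebook size of 2048 matches the paper; the latent dimension and the random inputs are purely illustrative.

```python
import numpy as np

def vector_quantize(latents: np.ndarray, codebook: np.ndarray):
    """Map each latent vector to its nearest codebook entry.

    latents:  (T, D) encoder outputs, one per timestep
    codebook: (K, D) learned codebook (K = 2048 per level in Jukebox)
    """
    # Squared Euclidean distance from every latent to every codebook entry.
    d = ((latents[:, None, :] - codebook[None, :, :]) ** 2).sum(axis=-1)  # (T, K)
    codes = d.argmin(axis=1)       # discrete codes: what the priors model
    return codes, codebook[codes]  # quantized vectors: what the decoder sees

rng = np.random.default_rng(0)
codes, quantized = vector_quantize(rng.normal(size=(16, 64)),
                                   rng.normal(size=(2048, 64)))
```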

How the generation part works — as described in the VQ-VAE paper and the VQ-VAE-2 paper, a powerful autoregressive decoder is used (but in Jukebox’s algorithm, separate decoders are used for each level, and the input is reconstructed independently from the codes of each level to maximize the use of the upper levels). This generative phase is essentially about (from their official site):

(training) the prior models whose goal is to learn the distribution of music codes encoded by VQ-VAE and to generate music in this compressed discrete space. […] The top-level prior models the long-range structure of music, and samples decoded from this level have lower audio quality but capture high-level semantics like singing and melodies. The middle and bottom upsampling priors add local musical structures like timbre, significantly improving the audio quality. […] Once all of the priors are trained, we can generate codes from the top level, upsample them using the upsamplers, and decode them back to the raw audio space using the VQ-VAE decoder to sample novel songs.
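
Purely as a data-flow illustration of that passage, the three-level cascade looks something like the sketch below. The trained transformer priors are replaced with random stand-ins, so this shows only how codes flow between levels, not how they are modelled.

```python
import numpy as np

rng = np.random.default_rng(0)

def top_prior(length: int, vocab: int = 2048) -> np.ndarray:
    """Stand-in for the top-level prior (long-range structure)."""
    return rng.integers(0, vocab, size=length)

def upsampler(codes: np.ndarray, factor: int = 4, vocab: int = 2048) -> np.ndarray:
    """Stand-in for an upsampling prior: emits `factor` finer codes per code,
    conditioned (in the real model) on the coarser level above."""
    return rng.integers(0, vocab, size=len(codes) * factor)

top = top_prior(length=128)     # coarse codes: melody-level structure
middle = upsampler(top)         # adds local structure like timbre
bottom = upsampler(middle)      # finest codes, passed to the VQ-VAE decoder
# The VQ-VAE decoder would then map `bottom` back to a raw audio waveform.
```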

Read more about it or try out the demo here.

5. Engaging Image Captioning via Personality

link

Authors: Kurt Shuster, Samuel Humeau, Hexiang Hu, Antoine Bordes, Jason Weston

I came across this research paper while browsing through the research projects listed in Facebook AI’s conversational AI category. Traditionally, image captioning tasks are very factual and emotionless (e.g. labels like “This is a cat” or “Boy playing football with friends”). In this project, they aim to inject emotions or personalities into their captions to make them seem more human and engaging. Citing from their paper, “the goal is to be as engaging to humans as possible by incorporating controllable style and personality traits”.

Hopefully, this project will be used for good — the idea of machines/bots communicating like humans seems a little unsettling to me… Nonetheless, it is still a pretty interesting project.

Some key technical ideas/concepts:

link

From the paper:

For image representations, we employ the work of [32] (D. Mahajan, R. B. Girshick, V. Ramanathan, K. He, M. Paluri, Y. Li, A. Bharambe, and L. van der Maaten. Exploring the limits of weakly supervised pretraining. CoRR, abs/1805.00932, 2018.) that uses a ResNeXt architecture trained on 3.5 billion social media images which we apply to both. For text, we use a Transformer sentence representation following [36] (P.-E. Mazaré, S. Humeau, M. Raison, and A. Bordes. Training Millions of Personalized Dialogue Agents. ArXiv e-prints, Sept. 2018.) trained on 1.7 billion dialogue examples. Our generative model gives a new state-of-the-art on COCO caption generation, and our retrieval architecture, TransResNet, yields the highest known R@1 score on the Flickr30k dataset.
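
At a high level, the retrieval side works like a dual encoder: embed the image (via the ResNeXt encoder) and every candidate caption (via the Transformer encoder) into a shared space, then rank candidates by similarity. Here is a minimal sketch with random stand-in embeddings; the dimension is illustrative.

```python
import numpy as np

def best_caption(image_emb: np.ndarray, caption_embs: np.ndarray) -> int:
    """Score each candidate caption against the image and return the index of
    the top match (the candidate that metrics like R@1 evaluate)."""
    scores = caption_embs @ image_emb   # (N,) dot-product similarities
    return int(scores.argmax())

rng = np.random.default_rng(0)
d = 512                                    # shared embedding size (illustrative)
image_emb = rng.normal(size=d)             # stand-in for the ResNeXt image encoding
caption_embs = rng.normal(size=(100, d))   # stand-ins for Transformer caption encodings
print(best_caption(image_emb, caption_embs))
```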

Read more about it here.

Bonus: Three more interesting projects (I’ll just include links for now [because researching and writing this article took up way too much time rip], but comment if you want technical explanations or summaries!) —

6. AI that clones your voice after listening for 5 seconds: link

7. AI that creates real scenes from your photos: link

8. Restoring old images: link

Concluding remarks

When I was reading through the various deep-tech research papers, I really started to worry about how easily these techniques could be misused and cause detrimental consequences for our modern society as a whole. Even though some of them may seem innocuous, the gap between an innocent toy project and a powerful manipulative tool is extremely narrow. I sincerely hope there will soon be some frameworks or standards, so that we acquire a controlled assistant rather than unleash a monster.

That’s all I have to share for now. If I got any of the concepts or ideas wrong, I sincerely apologize — if you spot any mistakes, please feel free to reach out to me so I can correct them as soon as possible.

Thank you for reading this article.

Translated from: https://medium.com/@m.fortitudo.fr/5-mind-blowing-machine-learning-deep-tech-projects-b33479318986

