17_2 Representation Learning and Generative Learning: Deep Convolutional, Progressive Growing, and Style GANs
17_Representation: tying weights, CNN/RNN denoising and sparse autoencoders, latent loss, accuracy, t-SNE, KL divergence, L1, hashing autoencoders: https://blog.csdn.net/Linli522362242/article/details/116576478
cp17_GAN for Synthesizing Data: fully connected layers to convolutional layers, Colab, ax.transAxes, twiny, spines: https://blog.csdn.net/Linli522362242/article/details/116565829
cp17_2 GAN for Synthesizing: upsampling, transposed convolutions, batch normalization, DCGAN (transposed convolutions in GANs), KL/JS divergence, dual axes, EM, tape: https://blog.csdn.net/Linli522362242/article/details/117370337
Generative Adversarial Networks
Generative adversarial networks were proposed in a 2014 paper(Ian Goodfellow et al., “Generative Adversarial Nets,” Proceedings of the 27th International Conference on Neural
Information Processing Systems 2 (2014): 2672–2680) by Ian Goodfellow et al., and although the idea got researchers excited almost instantly, it took a few years to overcome some of the difficulties of training GANs. Like many great ideas, it seems simple in hindsight: make neural networks compete against each other in the hope that this competition will push them to excel. As shown in Figure 17-15, a GAN is composed of two neural networks:
- Generator
Takes a random distribution as input (typically Gaussian) and outputs some data—typically, an image. You can think of the random inputs as the latent representations (i.e., codings) of the image to be generated. So, as you can see, the generator offers the same functionality as a decoder in a variational autoencoder, and it can be used in the same way to generate new images (just feed it some Gaussian noise, and it outputs a brand-new image). However, it is trained very differently, as we will soon see.
- Discriminator
Takes either a fake image from the generator or a real image from the training set as input, and must guess whether the input image is fake or real.
Figure 17-15. A generative adversarial network
During training, the generator and the discriminator have opposite goals: the discriminator tries to tell fake images from real images, while the generator tries to produce images that look real enough to trick the discriminator. Because the GAN is composed of two networks with different objectives, it cannot be trained like a regular neural network. Each training iteration is divided into two phases:
- In the first phase, we train the discriminator. A batch of real images is sampled from the training set and combined with an equal number of fake images produced by the generator. The labels are set to 0 for fake images and 1 for real images, and the discriminator is trained on this labeled batch for one step, using the binary cross-entropy loss. Importantly, backpropagation only optimizes the weights of the discriminator during this phase.
- In the second phase, we train the generator. We first use it to produce another batch of fake images, and once again the discriminator is used to tell whether the images are fake or real. This time we do not add real images to the batch, and all the labels are set to 1 (real): in other words, we want the generator to produce images that the discriminator will (wrongly) believe to be real! Crucially, the weights of the discriminator are frozen during this step, so backpropagation only affects the weights of the generator.
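The label bookkeeping behind these two phases can be sketched in a few lines of NumPy (a minimal illustration; the random arrays are stand-ins for the generator's output and the training images, and batch_size matches the value used later):

```python
import numpy as np

batch_size = 32
fake_images = np.random.rand(batch_size, 28, 28)  # stand-ins for generator output
real_images = np.random.rand(batch_size, 28, 28)  # stand-ins for training images

# Phase 1: the discriminator sees fakes (label 0) followed by reals (label 1)
X_phase1 = np.concatenate([fake_images, real_images], axis=0)
y_phase1 = np.concatenate([np.zeros(batch_size), np.ones(batch_size)])

# Phase 2: only fakes are fed through the gan model, but every label is 1,
# so the generator is rewarded when the discriminator is fooled
y_phase2 = np.ones(batch_size)
```

Note that in phase 2 no real image appears at all: the target of 1 on purely fake inputs is what pushes the generator toward realistic output.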
The generator never actually sees any real images, yet it gradually learns to produce convincing fake images! All it gets is the gradients flowing back through the discriminator. Fortunately, the better the discriminator gets, the more information about the real images is contained in these secondhand gradients, so the generator can make significant progress.
Let’s go ahead and build a simple GAN for Fashion MNIST.
First, we need to build the generator and the discriminator. The generator is similar to an autoencoder’s decoder, and the discriminator is a regular binary classifier (it takes an image as input and ends with a Dense layer containing a single unit and using the sigmoid activation function). For the second phase of each training iteration, we also need the full GAN model containing the generator followed by the discriminator:
import numpy as np
import tensorflow as tf
from tensorflow import keras

np.random.seed(42)
tf.random.set_seed(42)

codings_size = 30

generator = keras.models.Sequential([
    keras.layers.Dense(100, activation="selu", input_shape=[codings_size]),
    keras.layers.Dense(150, activation="selu"),
    keras.layers.Dense(28 * 28, activation="sigmoid"),
    keras.layers.Reshape([28, 28])
])

discriminator = keras.models.Sequential([
    keras.layers.Flatten(input_shape=[28, 28]),
    keras.layers.Dense(150, activation="selu"),
    keras.layers.Dense(100, activation="selu"),
    keras.layers.Dense(1, activation="sigmoid")
])

gan = keras.models.Sequential([generator, discriminator])
Next, we need to compile these models. As the discriminator is a binary classifier, we can naturally use the binary cross-entropy loss. The generator will only be trained through the gan model, so we do not need to compile it at all. The gan model is also a binary classifier, so it can use the binary cross-entropy loss. Importantly, the discriminator should not be trained during the second phase, so we make it non-trainable before compiling the gan model:
discriminator.compile(loss="binary_crossentropy", optimizer="rmsprop")
discriminator.trainable = False
gan.compile(loss="binary_crossentropy", optimizer="rmsprop")
The trainable attribute is taken into account by Keras only when compiling a model, so after running this code, the discriminator is trainable when we call its fit() or train_on_batch() methods (which we will be using), but it is not trainable when we call these methods on the gan model.
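This freeze-at-compile behavior can be checked with a tiny stand-in pair of models (a hedged sketch, not part of the GAN itself; the layer sizes and the mse/sgd choices are arbitrary):

```python
import numpy as np
import tensorflow as tf
from tensorflow import keras

tf.random.set_seed(0)

# inner is compiled on its own while trainable, then frozen before it is
# wrapped in combo, mirroring the discriminator/gan setup above
inner = keras.models.Sequential([keras.layers.Dense(1, input_shape=[2])])
inner.compile(loss="mse", optimizer="sgd")
inner.trainable = False

combo = keras.models.Sequential([keras.layers.Dense(2, input_shape=[2]), inner])
combo.compile(loss="mse", optimizer="sgd")  # compiled while inner is frozen

w_before = inner.get_weights()[0].copy()
combo.train_on_batch(np.ones((4, 2), np.float32), np.ones((4, 1), np.float32))
w_after = inner.get_weights()[0]
# training combo leaves inner's kernel untouched
```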
Since the training loop is unusual, we cannot use the regular fit() method. Instead, we will write a custom training loop. For this, we first need to create a Dataset to iterate through the images:
from tensorflow import keras

(X_train_full, y_train_full), (X_test, y_test) = keras.datasets.fashion_mnist.load_data()
X_train_full = X_train_full.astype(np.float32) / 255
X_test = X_test.astype(np.float32) / 255
X_train, X_valid = X_train_full[:-5000], X_train_full[-5000:]
y_train, y_valid = y_train_full[:-5000], y_train_full[-5000:]

batch_size = 32
dataset = tf.data.Dataset.from_tensor_slices(X_train).shuffle(1000)
dataset = dataset.batch(batch_size, drop_remainder=True).prefetch(1)
import matplotlib.pyplot as plt

def plot_multiple_images(images, n_cols=None):
    n_cols = n_cols or len(images)
    n_rows = (len(images) - 1) // n_cols + 1
    if images.shape[-1] == 1:
        images = np.squeeze(images, axis=-1)
    plt.figure(figsize=(n_cols, n_rows))
    for index, image in enumerate(images):
        plt.subplot(n_rows, n_cols, index + 1)
        plt.imshow(image, cmap="binary")
        plt.axis("off")
We are now ready to write the training loop. Let’s wrap it in a train_gan() function.
https://blog.csdn.net/Linli522362242/article/details/116565829
As discussed earlier, you can see the two phases at each iteration:
- In the first phase, we feed Gaussian noise to the generator to produce fake images, concatenate an equal number of real images from the batch, and train the discriminator on the combined batch with labels 0 (fake) and 1 (real).
- In the second phase, we feed the gan model some Gaussian noise and set all the labels to 1 (real), so that only the generator's weights are updated.
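A minimal sketch of train_gan() following the two phases above (the explicit trainable toggles inside the loop merely silence Keras's warning about the flag differing from its value at compile time; n_epochs is an arbitrary default):

```python
import tensorflow as tf

def train_gan(gan, dataset, batch_size, codings_size, n_epochs=50):
    generator, discriminator = gan.layers
    for epoch in range(n_epochs):
        for X_batch in dataset:
            # Phase 1: train the discriminator on half fake, half real images
            noise = tf.random.normal(shape=[batch_size, codings_size])
            generated_images = generator(noise)
            X_fake_and_real = tf.concat([generated_images, X_batch], axis=0)
            y1 = tf.constant([[0.]] * batch_size + [[1.]] * batch_size)
            discriminator.trainable = True
            discriminator.train_on_batch(X_fake_and_real, y1)
            # Phase 2: train the generator through the frozen discriminator,
            # labeling all fakes as real
            noise = tf.random.normal(shape=[batch_size, codings_size])
            y2 = tf.constant([[1.]] * batch_size)
            discriminator.trainable = False
            gan.train_on_batch(noise, y2)
```

Calling train_gan(gan, dataset, batch_size, codings_size) then runs both phases once per batch, per epoch.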