Table of Contents

  • Introduction
  • Installation
  • Data Visualization
    • Browsing the Data
    • Principal Component Analysis
  • Training and Saving the Model
  • Loading the Model and Predicting
  • Using the Latest EfficientNet Weights
  • Putting It All Together
  • Training on a Subset of the Dataset
  • Sizes of the Different EfficientNet Models
  • References

Introduction

As one of the most effective models of 2019, EfficientNet achieves state-of-the-art accuracy on ImageNet and on common image classification transfer-learning tasks while requiring the fewest FLOPS for inference.

Scaling up the EfficientNet base model yields a family of EfficientNet models that beat all previous convolutional neural networks on both efficiency and accuracy. In particular, EfficientNet-B7 achieves 84.4% top-1 and 97.1% top-5 accuracy on ImageNet, while being 8.4x smaller and 6.1x faster at inference than the most accurate model available at the time. Through transfer learning, EfficientNets also reached state-of-the-art results on several well-known datasets.

Paper: EfficientNet: Rethinking Model Scaling for Convolutional Neural Networks

This article uses a pretrained EfficientNet to classify the Stanford Dogs Dataset, which contains 20,580 images of 120 dog breeds from around the world: 12,000 for training and 8,580 for testing.

The model here reaches 79.70% accuracy, and 82.20% with a larger EfficientNet; for reference, according to Papers With Code, TransFG (2021) reaches 92.3%.

Installation

pip install tensorflow-gpu==2.3.0

Download the dataset:

import tensorflow_datasets as tfds

tfds.load('stanford_dogs', with_info=True, as_supervised=True)

The dataset has also been uploaded to Baidu Netdisk (extraction code: snv2).

Storage paths:

  • Windows: C:\Users\Administrator\tensorflow_datasets
  • Linux: /home/<username>/tensorflow_datasets/
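To confirm that TensorFlow Datasets picks up the manually placed files, here is a minimal sketch; download=False is a standard tfds.load argument that raises an error instead of re-downloading when the files are missing.

import tensorflow_datasets as tfds

# Assumes the dataset was placed in the default ~/tensorflow_datasets
# directory; with download=False, tfds fails fast rather than downloading.
ds, info = tfds.load('stanford_dogs', with_info=True, as_supervised=True, download=False)
print(info.splits['train'].num_examples, info.splits['test'].num_examples)  # 12000 8580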

Data Visualization

This section can be skipped.

Browsing the Data

import matplotlib.pyplot as plt
import tensorflow_datasets as tfds

(ds_train, ds_test), ds_info = tfds.load('stanford_dogs', split=['train', 'test'], with_info=True, as_supervised=True)
label_info = ds_info.features['label']
for i, (image, label) in enumerate(ds_train.take(9)):
    class_name = label_info.int2str(label).split('-')[1]  # e.g. 'n02085620-chihuahua' -> 'chihuahua'
    ax = plt.subplot(3, 3, i + 1)
    plt.imshow(image.numpy())
    plt.title(class_name)
    plt.axis('off')
plt.show()

The classes are remarkably similar to one another.

Principal Component Analysis

import numpy as np
import matplotlib.pyplot as plt
from sklearn import decomposition
import tensorflow as tf
import tensorflow_datasets as tfds
from tensorflow.keras.applications.vgg16 import VGG16

(ds_train, ds_test), ds_info = tfds.load('stanford_dogs', split=['train', 'test'], with_info=True, as_supervised=True)
class_names = ds_info.features['label'].names
size = (224, 224)
num_classes = len(class_names)

def input_preprocess(image, label):
    image = tf.image.resize(image, size)
    image = image / 255.0
    label = tf.one_hot(label, num_classes)
    return image, label

ds_train = ds_train.map(input_preprocess, num_parallel_calls=tf.data.experimental.AUTOTUNE)
train_labels = np.array([np.argmax(y) for x, y in ds_train], dtype='int32')
ds_train = ds_train.batch(batch_size=64)
ds_train = ds_train.prefetch(tf.data.experimental.AUTOTUNE)

model = VGG16(include_top=False, weights='imagenet')
train_features = model.predict(ds_train)  # training-set features
print(train_features.shape)
n_train, x, y, z = train_features.shape  # number of samples, height, width, channels

pca = decomposition.PCA(n_components=2)  # PCA: project the data into a lower-dimensional space via singular value decomposition
X = train_features.reshape((n_train, x * y * z))
pca.fit(X)
C = pca.transform(X)
C1 = C[:, 0]
C2 = C[:, 1]

plt.figure(figsize=(10, 10))
for i, class_name in enumerate(class_names):
    plt.scatter(C1[train_labels == i][:1000], C2[train_labels == i][:1000], label=class_name, alpha=0.4)
plt.legend()
plt.title('PCA Projection')
plt.show()


The projection shows that the similarity between classes is very high.

Training and Saving the Model

import json
import datetime
from pathlib import Path

import tensorflow as tf
import tensorflow_datasets as tfds
from tensorflow.keras import layers
from tensorflow.keras.models import Sequential
from tensorflow.keras.applications import EfficientNetB0
from tensorflow.keras.layers.experimental import preprocessing

# Dataset
IMG_SIZE = 224
batch_size = 64
(ds_train, ds_test), ds_info = tfds.load('stanford_dogs', split=['train', 'test'], with_info=True, as_supervised=True)
num_classes = ds_info.features['label'].num_classes
size = (IMG_SIZE, IMG_SIZE)
ds_train = ds_train.map(lambda image, label: (tf.image.resize(image, size), label))
ds_test = ds_test.map(lambda image, label: (tf.image.resize(image, size), label))
class_names = ds_info.features['label'].names
json.dump(class_names, open('class_names.json', mode='w'))  # save the class names
print(class_names)

def input_preprocess(image, label):
    label = tf.one_hot(label, num_classes)
    return image, label

ds_train = ds_train.map(input_preprocess, num_parallel_calls=tf.data.experimental.AUTOTUNE)
ds_train = ds_train.batch(batch_size=batch_size, drop_remainder=True)
ds_train = ds_train.prefetch(tf.data.experimental.AUTOTUNE)
ds_test = ds_test.map(input_preprocess)
ds_test = ds_test.batch(batch_size=batch_size, drop_remainder=True)

def build_model(num_classes):
    """Create and compile the model."""
    inputs = layers.Input(shape=(IMG_SIZE, IMG_SIZE, 3))
    img_augmentation = Sequential([
        preprocessing.RandomRotation(factor=0.15),
        preprocessing.RandomTranslation(height_factor=0.1, width_factor=0.1),
        preprocessing.RandomFlip(),
        preprocessing.RandomContrast(factor=0.1),
    ])
    x = img_augmentation(inputs)
    model = EfficientNetB0(include_top=False, input_tensor=x, weights='imagenet')
    model.trainable = False  # freeze the pretrained backbone
    x = layers.GlobalAveragePooling2D()(model.output)
    x = layers.BatchNormalization()(x)
    x = layers.Dropout(0.2)(x)
    outputs = layers.Dense(num_classes, activation='softmax')(x)
    model = tf.keras.Model(inputs, outputs)
    # Unfreeze the top 20 layers for fine-tuning, keeping BatchNormalization frozen
    for layer in model.layers[-20:]:
        if not isinstance(layer, layers.BatchNormalization):
            layer.trainable = True
    model.compile(optimizer=tf.keras.optimizers.Adam(learning_rate=0.0001),
                  loss='categorical_crossentropy',
                  metrics=['accuracy'])
    return model

# Callbacks
Path('models').mkdir(parents=True, exist_ok=True)
filepath = 'models/best_{}.h5'.format(datetime.datetime.now().strftime('%Y%m%d%H%M%S'))
callbacks = [
    tf.keras.callbacks.ModelCheckpoint(filepath=filepath, monitor='val_loss', verbose=1, save_best_only=True),  # save the best model
    tf.keras.callbacks.ReduceLROnPlateau(monitor='val_loss', factor=0.1, patience=10, verbose=1),  # reduce the learning rate when val_loss plateaus
    tf.keras.callbacks.EarlyStopping(monitor='val_loss', patience=25, verbose=1),  # stop when val_loss stops improving
]

model = build_model(num_classes)
history = model.fit(ds_train, epochs=10000, validation_data=ds_test, callbacks=callbacks)
print(filepath)
# Epoch 00001: val_loss improved from inf to 1.96004, saving model to models/best_20210722183223.h5
# 187/187 [==============================] - 23s 125ms/step - loss: 4.1022 - accuracy: 0.1308 - val_loss: 1.9600 - val_accuracy: 0.5384
# ...
# Epoch 15/10000
# 187/187 [==============================] - ETA: 0s - loss: 0.9633 - accuracy: 0.7127
# Epoch 00015: val_loss improved from 0.68442 to 0.68409, saving model to models/best_20210722183223.h5
# 187/187 [==============================] - 22s 119ms/step - loss: 0.9633 - accuracy: 0.7127 - val_loss: 0.6841 - val_accuracy: 0.7874
# ...
# Epoch 00040: early stopping
# models/best_20210722183223.h5

The validation accuracy reaches 78.74%.

With EfficientNetB2 it reaches 82.29%, though training takes longer and the model is larger; a sketch of the swap follows.
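A minimal sketch of the swap, assuming the training script above stays otherwise unchanged; 260x260 is EfficientNetB2's native input resolution in Keras.

from tensorflow.keras import layers
from tensorflow.keras.applications import EfficientNetB2

IMG_SIZE = 260  # EfficientNetB2's native resolution
inputs = layers.Input(shape=(IMG_SIZE, IMG_SIZE, 3))
# In build_model, only the backbone line changes:
backbone = EfficientNetB2(include_top=False, input_tensor=inputs, weights='imagenet')
print(backbone.output_shape)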

Loading the Model and Predicting

Download the original Stanford Dogs Dataset, also uploaded to Baidu Netdisk (extraction code: ny5b).

Extract images.tar.

Rename the model to model.h5.

import time
import json
import random
import pathlib

import numpy as np
import tensorflow as tf
import matplotlib.pyplot as plt
from tensorflow.keras.preprocessing import image

# Load the class names
class_names = json.load(open('class_names.json'))
print(class_names)

# Load the model
start = time.perf_counter()  # time.clock() was removed in Python 3.8
model = tf.keras.models.load_model('model.h5')
print('Warming up took {:.2f}s'.format(time.perf_counter() - start))

# Predict
image_size = 224  # input image size
paths = list(pathlib.Path('Images').rglob('*.jpg'))  # all image files; the *.jpg pattern skips the breed directories
while True:
    path = random.choice(paths)
    x = image.load_img(path=path, target_size=(image_size, image_size))
    plt.imshow(x)
    plt.show()
    x = image.img_to_array(x)
    x = np.expand_dims(x, axis=0)
    start = time.perf_counter()
    y = model.predict(x)  # predict
    print('Prediction took {:.2f}s'.format(time.perf_counter() - start))
    # Confidence per class, highest first
    print('{}'.format(path.parent.name), end=' ')  # ground-truth label
    for i in np.argsort(y[0])[::-1]:
        print('{}: {:.2f}%'.format(class_names[i], y[0][i] * 100), end=' ')
    print()
    q = input('Press Enter to continue, q to quit')
    if q == 'q':
        break
# Warming up took 2.08s
# Prediction took 1.31s
# n02089867-Walker_hound n02089867-walker_hound: 97.75% n02088238-basset: 2.14% n02089973-english_foxhound: 0.07% n02088364-beagle: 0.03% n02088466-bloodhound: 0.00% n02090379-redbone: 0.00% n02110806-basenji: 0.00% n02107574-greater_swiss_mountain_dog: 0.00% n02109047-great_dane: 0.00% n02109525-saint_bernard: 0.00% n02091134-whippet: 0.00% n02087046-toy_terrier: 0.00% n02093428-american_staffordshire_terrier: 0.00% n02091244-ibizan_hound: 0.00% n02108089-boxer: 0.00% n02111500-great_pyrenees: 0.00% n02088632-bluetick: 0.00% n02095889-sealyham_terrier: 0.00% n02100583-vizsla: 0.00% n02100735-english_setter: 0.00% n02113799-standard_poodle: 0.00% n02091032-italian_greyhound: 0.00% n02085782-japanese_spaniel: 0.00% n02092339-weimaraner: 0.00% n02087394-rhodesian_ridgeback: 0.00% n02101388-brittany_spaniel: 0.00% n02108915-french_bulldog: 0.00% n02089078-black-and-tan_coonhound: 0.00% n02091831-saluki: 0.00% n02102177-welsh_springer_spaniel: 0.00% n02099712-labrador_retriever: 0.00% n02090622-borzoi: 0.00% n02098105-soft-coated_wheaten_terrier: 0.00% n02108422-bull_mastiff: 0.00% n02093647-bedlington_terrier: 0.00% n02105505-komondor: 0.00% n02100877-irish_setter: 0.00% n02108000-entlebucher: 0.00% n02113186-cardigan: 0.00% n02098413-lhasa: 0.00% n02085620-chihuahua: 0.00% n02110185-siberian_husky: 0.00% n02105855-shetland_sheepdog: 0.00% n02086646-blenheim_spaniel: 0.00% n02100236-german_short-haired_pointer: 0.00% n02086910-papillon: 0.00% n02086240-shih-tzu: 0.00% n02107908-appenzeller: 0.00% n02096585-boston_bull: 0.00% n02110063-malamute: 0.00% n02115641-dingo: 0.00% n02113023-pembroke: 0.00% n02090721-irish_wolfhound: 0.00% n02105162-malinois: 0.00% n02115913-dhole: 0.00% n02093256-staffordshire_bullterrier: 0.00% n02113712-miniature_poodle: 0.00% n02108551-tibetan_mastiff: 0.00% n02102318-cocker_spaniel: 0.00% n02111889-samoyed: 0.00% n02106550-rottweiler: 0.00% n02104365-schipperke: 0.00% n02105251-briard: 0.00% n02099429-curly-coated_retriever: 0.00% n02105412-kelpie: 0.00% n02099601-golden_retriever: 0.00% n02091635-otterhound: 0.00% n02106030-collie: 0.00% n02098286-west_highland_white_terrier: 0.00% n02097474-tibetan_terrier: 0.00% n02105641-old_english_sheepdog: 0.00% n02099849-chesapeake_bay_retriever: 0.00% n02109961-eskimo_dog: 0.00% n02111129-leonberg: 0.00% n02099267-flat-coated_retriever: 0.00% n02113978-mexican_hairless: 0.00% n02107312-miniature_pinscher: 0.00% n02102040-english_springer: 0.00% n02102973-irish_water_spaniel: 0.00% n02094114-norfolk_terrier: 0.00% n02086079-pekinese: 0.00% n02088094-afghan_hound: 0.00% n02112706-brabancon_griffon: 0.00% n02092002-scottish_deerhound: 0.00% n02112350-keeshond: 0.00% n02107683-bernese_mountain_dog: 0.00% n02097130-giant_schnauzer: 0.00% n02105056-groenendael: 0.00% n02107142-doberman: 0.00% n02096294-australian_terrier: 0.00% n02093991-irish_terrier: 0.00% n02106662-german_shepherd: 0.00% n02110958-pug: 0.00% n02113624-toy_poodle: 0.00% n02116738-african_hunting_dog: 0.00% n02096051-airedale: 0.00% n02111277-newfoundland: 0.00% n02104029-kuvasz: 0.00% n02094433-yorkshire_terrier: 0.00% n02093754-border_terrier: 0.00% n02095314-wire-haired_fox_terrier: 0.00% n02097658-silky_terrier: 0.00% n02112137-chow: 0.00% n02097298-scotch_terrier: 0.00% n02101556-clumber: 0.00% n02102480-sussex_spaniel: 0.00% n02097047-miniature_schnauzer: 0.00% n02096177-cairn: 0.00% n02112018-pomeranian: 0.00% n02110627-affenpinscher: 0.00% n02094258-norwich_terrier: 0.00% n02106166-border_collie: 0.00% n02096437-dandie_dinmont: 0.00% 
n02095570-lakeland_terrier: 0.00% n02101006-gordon_setter: 0.00% n02093859-kerry_blue_terrier: 0.00% n02091467-norwegian_elkhound: 0.00% n02085936-maltese_dog: 0.00% n02106382-bouvier_des_flandres: 0.00% n02097209-standard_schnauzer: 0.00%
# Press Enter to continue, q to quit

Using the Latest EfficientNet Weights

Read: Image classification via fine-tuning with EfficientNet

Download noisy_student_efficientnet-b1.tar.gz
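The Keras tutorial above converts the TensorFlow checkpoint with its efficientnet_weight_update_util.py helper script and then loads the resulting .h5 file. A minimal sketch, assuming the archive extracts to ./efficientnet-b1 and that helper script is available:

# Shell steps from the tutorial (as comments):
#   tar -xf noisy_student_efficientnet-b1.tar.gz
#   python efficientnet_weight_update_util.py --model b1 --notop \
#       --ckpt efficientnet-b1/model.ckpt --o efficientnetb1_notop.h5
from tensorflow.keras.applications import EfficientNetB1

# The weights argument accepts a file path as well as 'imagenet';
# 240x240 is EfficientNetB1's native resolution.
model = EfficientNetB1(include_top=False, weights='efficientnetb1_notop.h5', input_shape=(240, 240, 3))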

Putting It All Together

Using the Stanford Dogs Dataset as the example, download images.tar and extract it.

1. Split the dataset

pip install split-folders

import splitfolders

splitfolders.ratio(input='Images', output='output')
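This creates output/train, output/val and output/test. A sketch of the equivalent explicit call; the seed and ratio values are the defaults documented in the split-folders README, which match the roughly 80/10/10 split seen in the training log below.

import splitfolders

# Explicit defaults (assumption: per the split-folders README);
# change ratio for a different train/val/test split.
splitfolders.ratio(input='Images', output='output', seed=1337, ratio=(.8, .1, .1))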

2. Train and save the model

import json
import datetime
from pathlib import Path

import tensorflow as tf
import matplotlib.pyplot as plt
from tensorflow.keras import layers
from tensorflow.keras.models import Sequential
from tensorflow.keras.applications import EfficientNetB0
from tensorflow.keras.layers.experimental import preprocessing
from tensorflow.keras.preprocessing import image_dataset_from_directory

# Dataset
img_size = 224
batch_size = 64
image_size = (224, 224)
input_shape = image_size + (3,)

train_dir = 'output/train'  # training set directory
validation_dir = 'output/val'  # validation set directory
train_dataset = image_dataset_from_directory(train_dir, batch_size=batch_size, image_size=image_size)
validation_dataset = image_dataset_from_directory(validation_dir, batch_size=batch_size, image_size=image_size)
AUTOTUNE = tf.data.experimental.AUTOTUNE
train_data = train_dataset.prefetch(AUTOTUNE)
validation_data = validation_dataset.prefetch(AUTOTUNE)

class_names = train_dataset.class_names  # class names are derived from the directory names
json.dump(class_names, open('class_names.json', mode='w'))  # save the class names
num_classes = len(class_names)
print('Found {} classes'.format(num_classes))

def build_model(num_classes):
    """Create and compile the model."""
    inputs = layers.Input(shape=input_shape)
    img_augmentation = Sequential([
        preprocessing.RandomRotation(factor=0.15),
        preprocessing.RandomTranslation(height_factor=0.1, width_factor=0.1),
        preprocessing.RandomFlip(),
        preprocessing.RandomContrast(factor=0.1),
    ])
    x = img_augmentation(inputs)
    model = EfficientNetB0(include_top=False, weights='imagenet', input_tensor=x, input_shape=input_shape)
    model.trainable = False  # freeze the pretrained backbone
    x = layers.GlobalAveragePooling2D()(model.output)
    x = layers.BatchNormalization()(x)
    x = layers.Dropout(0.2)(x)
    outputs = layers.Dense(num_classes, activation='softmax')(x)
    model = tf.keras.Model(inputs, outputs)
    # Unfreeze the top 20 layers for fine-tuning, keeping BatchNormalization frozen
    for layer in model.layers[-20:]:
        if not isinstance(layer, layers.BatchNormalization):
            layer.trainable = True
    model.compile(optimizer=tf.keras.optimizers.Adam(learning_rate=0.0001),
                  loss='sparse_categorical_crossentropy',
                  metrics=['sparse_categorical_accuracy'])
    return model

# Callbacks
Path('models').mkdir(parents=True, exist_ok=True)
filepath = 'models/best_{}.h5'.format(datetime.datetime.now().strftime('%Y%m%d%H%M%S'))
callbacks = [
    tf.keras.callbacks.ModelCheckpoint(filepath=filepath, monitor='val_loss', verbose=1, save_best_only=True),  # save the best model
    tf.keras.callbacks.ReduceLROnPlateau(monitor='val_loss', factor=0.1, patience=10, verbose=1),  # reduce the learning rate when val_loss plateaus
    tf.keras.callbacks.EarlyStopping(monitor='val_loss', patience=25, verbose=1),  # stop when val_loss stops improving
]

model = build_model(num_classes)
history = model.fit(train_data, epochs=10000, validation_data=validation_data, callbacks=callbacks)
print(filepath)

# Plot the training curves
plt.plot(history.history['sparse_categorical_accuracy'])
plt.plot(history.history['val_sparse_categorical_accuracy'])
plt.title('model accuracy')
plt.ylabel('accuracy')
plt.xlabel('epoch')
plt.legend(['train', 'validation'], loc='upper left')
plt.figure()
plt.plot(history.history['loss'])
plt.plot(history.history['val_loss'])
plt.title('model loss')
plt.ylabel('loss')
plt.xlabel('epoch')
plt.legend(['train', 'validation'], loc='upper left')
plt.show()

Training log:

# Found 16418 files belonging to 120 classes.
# Found 2009 files belonging to 120 classes.
# Found 2153 files belonging to 120 classes.
# Found 120 classes
# Epoch 1/10000
# 257/257 [==============================] - ETA: 0s - loss: 3.6817 - sparse_categorical_accuracy: 0.1905
# Epoch 00001: val_loss improved from inf to 1.42889, saving model to models/best_20210812121553.h5
# ...
# 257/257 [==============================] - ETA: 0s - loss: 1.1770 - sparse_categorical_accuracy: 0.6570
# Epoch 00010: val_loss improved from 0.72170 to 0.71334, saving model to models/best_20210812121553.h5
# ...
# Epoch 00035: val_loss did not improve from 0.71334
# 257/257 [==============================] - 68s 266ms/step - loss: 0.6706 - sparse_categorical_accuracy: 0.7985 - val_loss: 0.7164 - val_sparse_categorical_accuracy: 0.7979
# Epoch 00035: early stopping
# models/best_20210812121553.h5


3. Evaluate

Rename the best checkpoint to model.h5; it is 26.4 MB.

import tensorflow as tf
from tensorflow.keras.preprocessing import image_dataset_from_directory

batch_size = 64
image_size = (224, 224)
test_dir = 'output/test'  # test set directory
test_dataset = image_dataset_from_directory(test_dir, batch_size=batch_size, image_size=image_size)
test_data = test_dataset.prefetch(buffer_size=tf.data.experimental.AUTOTUNE)

model = tf.keras.models.load_model('model.h5')
loss, accuracy = model.evaluate(test_data)
print('Test accuracy: {:.2f}% loss: {:.2f}'.format(accuracy * 100, loss))
# Found 2153 files belonging to 120 classes.
# 34/34 [==============================] - 16s 468ms/step - loss: 0.6626 - sparse_categorical_accuracy: 0.7970
# Test accuracy: 79.70% loss: 0.66

4. Load the model and predict

import time
import json
import random
import pathlib

import numpy as np
import tensorflow as tf
import matplotlib.pyplot as plt
from tensorflow.keras.preprocessing import image

# Load the class names
class_names = json.load(open('class_names.json'))
print(class_names)

# Load the model
start = time.perf_counter()  # time.clock() was removed in Python 3.8
model = tf.keras.models.load_model('model.h5')
print('Warming up took {:.2f}s'.format(time.perf_counter() - start))

# Predict
image_size = 224  # input image size
paths = list(pathlib.Path('output/test').rglob('*.jpg'))  # all test-set images; the *.jpg pattern skips the breed directories
while True:
    path = random.choice(paths)
    x = image.load_img(path=path, target_size=(image_size, image_size))
    plt.imshow(x)
    plt.show()
    x = image.img_to_array(x)
    x = np.expand_dims(x, axis=0)
    start = time.perf_counter()
    y = model.predict(x)  # predict
    print('Prediction took {:.2f}s'.format(time.perf_counter() - start))
    # Confidence per class, highest first
    print('{}'.format(path.parent.name), end=' ')  # ground-truth label
    for i in np.argsort(y[0])[::-1]:
        print('{}: {:.2f}%'.format(class_names[i], y[0][i] * 100), end=' ')
    print()
    q = input('Press Enter to continue, q to quit')
    if q == 'q':
        break
# Warming up took 2.34s
# Prediction took 0.10s
# n02115641-dingo n02115641-dingo: 99.60% n02115913-dhole: 0.18% n02091244-Ibizan_hound: 0.07% n02105412-kelpie: 0.06% n02109961-Eskimo_dog: 0.05% n02091831-Saluki: 0.02% n02091134-whippet: 0.01% n02110185-Siberian_husky: 0.00% n02093428-American_Staffordshire_terrier: 0.00% n02113023-Pembroke: 0.00% n02116738-African_hunting_dog: 0.00% n02110806-basenji: 0.00% n02095314-wire-haired_fox_terrier: 0.00% n02089973-English_foxhound: 0.00% n02113186-Cardigan: 0.00% n02088466-bloodhound: 0.00% n02099601-golden_retriever: 0.00% n02105162-malinois: 0.00% n02093991-Irish_terrier: 0.00% n02093647-Bedlington_terrier: 0.00% n02099712-Labrador_retriever: 0.00% n02099849-Chesapeake_Bay_retriever: 0.00% n02093256-Staffordshire_bullterrier: 0.00% n02085620-Chihuahua: 0.00% n02089867-Walker_hound: 0.00% n02111889-Samoyed: 0.00% n02109525-Saint_Bernard: 0.00% n02088238-basset: 0.00% n02091032-Italian_greyhound: 0.00% n02108915-French_bulldog: 0.00% n02095570-Lakeland_terrier: 0.00% n02094258-Norwich_terrier: 0.00% n02104029-kuvasz: 0.00% n02105056-groenendael: 0.00% n02106662-German_shepherd: 0.00% n02090622-borzoi: 0.00% n02090721-Irish_wolfhound: 0.00% n02098286-West_Highland_white_terrier: 0.00% n02094114-Norfolk_terrier: 0.00% n02087394-Rhodesian_ridgeback: 0.00% n02104365-schipperke: 0.00% n02113978-Mexican_hairless: 0.00% n02107574-Greater_Swiss_Mountain_dog: 0.00% n02092339-Weimaraner: 0.00% n02095889-Sealyham_terrier: 0.00% n02111500-Great_Pyrenees: 0.00% n02096585-Boston_bull: 0.00% n02088364-beagle: 0.00% n02090379-redbone: 0.00% n02109047-Great_Dane: 0.00% n02093754-Border_terrier: 0.00% n02106030-collie: 0.00% n02096177-cairn: 0.00% n02087046-toy_terrier: 0.00% n02102480-Sussex_spaniel: 0.00% n02110063-malamute: 0.00% n02107142-Doberman: 0.00% n02097047-miniature_schnauzer: 0.00% n02112706-Brabancon_griffon: 0.00% n02107908-Appenzeller: 0.00% n02108000-EntleBucher: 0.00% n02099267-flat-coated_retriever: 0.00% n02106166-Border_collie: 0.00% n02097298-Scotch_terrier: 0.00% n02100735-English_setter: 0.00% n02096437-Dandie_Dinmont: 0.00% n02091635-otterhound: 0.00% n02101388-Brittany_spaniel: 0.00% n02107312-miniature_pinscher: 0.00% n02092002-Scottish_deerhound: 0.00% n02106550-Rottweiler: 0.00% n02111277-Newfoundland: 0.00% n02093859-Kerry_blue_terrier: 0.00% n02108422-bull_mastiff: 0.00% n02100877-Irish_setter: 0.00% n02102177-Welsh_springer_spaniel: 0.00% n02112018-Pomeranian: 0.00% n02113799-standard_poodle: 0.00% n02089078-black-and-tan_coonhound: 0.00% n02106382-Bouvier_des_Flandres: 0.00% n02097209-standard_schnauzer: 0.00% n02110958-pug: 0.00% n02111129-Leonberg: 0.00% n02105505-komondor: 0.00% n02096294-Australian_terrier: 0.00% n02108551-Tibetan_mastiff: 0.00% n02108089-boxer: 0.00% n02112137-chow: 0.00% n02096051-Airedale: 0.00% n02088632-bluetick: 0.00% n02099429-curly-coated_retriever: 0.00% n02085936-Maltese_dog: 0.00% n02102318-cocker_spaniel: 0.00% n02100583-vizsla: 0.00% n02110627-affenpinscher: 0.00% n02097658-silky_terrier: 0.00% n02085782-Japanese_spaniel: 0.00% n02086646-Blenheim_spaniel: 0.00% n02102973-Irish_water_spaniel: 0.00% n02107683-Bernese_mountain_dog: 0.00% n02098105-soft-coated_wheaten_terrier: 0.00% n02105855-Shetland_sheepdog: 0.00% n02097130-giant_schnauzer: 0.00% n02102040-English_springer: 0.00% n02098413-Lhasa: 0.00% n02105641-Old_English_sheepdog: 0.00% n02091467-Norwegian_elkhound: 0.00% n02101556-clumber: 0.00% n02101006-Gordon_setter: 0.00% n02113712-miniature_poodle: 0.00% n02113624-toy_poodle: 0.00% n02086910-papillon: 0.00% n02086240-Shih-Tzu: 0.00% 
n02094433-Yorkshire_terrier: 0.00% n02088094-Afghan_hound: 0.00% n02112350-keeshond: 0.00% n02100236-German_short-haired_pointer: 0.00% n02105251-briard: 0.00% n02097474-Tibetan_terrier: 0.00% n02086079-Pekinese: 0.00%
# Press Enter to continue, q to quit

The monitor value val_loss can be replaced with val_sparse_categorical_accuracy, as in the sketch below.
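A minimal sketch of the change, assuming the callbacks above. When monitoring an accuracy metric, higher is better, so pass mode='max' (Keras's default mode='auto' usually infers this from the metric name, but being explicit is safer).

callbacks = [
    tf.keras.callbacks.ModelCheckpoint(filepath=filepath, monitor='val_sparse_categorical_accuracy', mode='max', verbose=1, save_best_only=True),
    tf.keras.callbacks.ReduceLROnPlateau(monitor='val_sparse_categorical_accuracy', mode='max', factor=0.1, patience=10, verbose=1),
    tf.keras.callbacks.EarlyStopping(monitor='val_sparse_categorical_accuracy', mode='max', patience=25, verbose=1),
]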

Training on a Subset of the Dataset

import json
import datetime
from pathlib import Path

import tensorflow as tf
import matplotlib.pyplot as plt
from tensorflow.keras import layers
from tensorflow.keras.models import Sequential
from tensorflow.keras.applications import EfficientNetB0
from tensorflow.keras.layers.experimental import preprocessing
from tensorflow.keras.preprocessing import image_dataset_from_directory

# Dataset
img_size = 224
batch_size = 64
image_size = (224, 224)
input_shape = image_size + (3,)

train_dir = 'output/train'  # training set directory
validation_dir = 'output/val'  # validation set directory
train_dataset = image_dataset_from_directory(train_dir, batch_size=batch_size, image_size=image_size)
validation_dataset = image_dataset_from_directory(validation_dir, batch_size=batch_size, image_size=image_size)
train_data = train_dataset.take(tf.data.experimental.cardinality(train_dataset) // 10)  # keep 10% of the batches
validation_data = validation_dataset.take(tf.data.experimental.cardinality(validation_dataset) // 10)  # keep 10% of the batches
AUTOTUNE = tf.data.experimental.AUTOTUNE
train_data = train_data.prefetch(AUTOTUNE)
validation_data = validation_data.prefetch(AUTOTUNE)

class_names = train_dataset.class_names  # class names are derived from the directory names
json.dump(class_names, open('class_names.json', mode='w'))  # save the class names
num_classes = len(class_names)
print('Found {} classes'.format(num_classes))

def build_model(num_classes):
    """Create and compile the model."""
    inputs = layers.Input(shape=input_shape)
    img_augmentation = Sequential([
        preprocessing.RandomRotation(factor=0.15),
        preprocessing.RandomTranslation(height_factor=0.1, width_factor=0.1),
        preprocessing.RandomFlip(),
        preprocessing.RandomContrast(factor=0.1),
    ])
    x = img_augmentation(inputs)
    model = EfficientNetB0(include_top=False, weights='imagenet', input_tensor=x, input_shape=input_shape)
    model.trainable = False  # freeze the pretrained backbone
    x = layers.GlobalAveragePooling2D()(model.output)
    x = layers.BatchNormalization()(x)
    x = layers.Dropout(0.2)(x)
    outputs = layers.Dense(num_classes, activation='softmax')(x)
    model = tf.keras.Model(inputs, outputs)
    # Unfreeze the top 20 layers for fine-tuning, keeping BatchNormalization frozen
    for layer in model.layers[-20:]:
        if not isinstance(layer, layers.BatchNormalization):
            layer.trainable = True
    model.compile(optimizer=tf.keras.optimizers.Adam(learning_rate=0.0001),
                  loss='sparse_categorical_crossentropy',
                  metrics=['sparse_categorical_accuracy'])
    return model

# Callbacks
Path('models').mkdir(parents=True, exist_ok=True)
filepath = 'models/best_{}.h5'.format(datetime.datetime.now().strftime('%Y%m%d%H%M%S'))
callbacks = [
    tf.keras.callbacks.ModelCheckpoint(filepath=filepath, monitor='val_loss', verbose=1, save_best_only=True),  # save the best model
    tf.keras.callbacks.ReduceLROnPlateau(monitor='val_loss', factor=0.1, patience=10, verbose=1),  # reduce the learning rate when val_loss plateaus
    tf.keras.callbacks.EarlyStopping(monitor='val_loss', patience=25, verbose=1),  # stop when val_loss stops improving
]

model = build_model(num_classes)
history = model.fit(train_data, epochs=10000, validation_data=validation_data, callbacks=callbacks)
print(filepath)

# Plot the training curves
plt.plot(history.history['sparse_categorical_accuracy'])
plt.plot(history.history['val_sparse_categorical_accuracy'])
plt.title('model accuracy')
plt.ylabel('accuracy')
plt.xlabel('epoch')
plt.legend(['train', 'validation'], loc='upper left')
plt.figure()
plt.plot(history.history['loss'])
plt.plot(history.history['val_loss'])
plt.title('model loss')
plt.ylabel('loss')
plt.xlabel('epoch')
plt.legend(['train', 'validation'], loc='upper left')
plt.show()

Sizes of the Different EfficientNet Models

Model            Size       Layers
EfficientNetB0   15.93 MB   237
EfficientNetB1   25.57 MB   339
EfficientNetB2   30.32 MB   339
EfficientNetB3   41.91 MB   384
EfficientNetB4   68.37 MB   474
EfficientNetB5   109.92 MB  576
EfficientNetB6   157.58 MB  666
EfficientNetB7   246.12 MB  813
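The layer counts can presumably be reproduced with len(model.layers); a minimal sketch (weights=None builds the architecture without downloading weights; the sizes in the table appear to be those of the saved weight files).

from tensorflow.keras.applications import EfficientNetB0, EfficientNetB7

for build_fn in (EfficientNetB0, EfficientNetB7):
    model = build_fn(weights=None)  # build without downloading pretrained weights
    print(build_fn.__name__, len(model.layers))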

References

  1. Image classification via fine-tuning with EfficientNet
  2. tf.keras.applications.efficientnet.EfficientNetB0
  3. Manually downloading and replacing datasets when TensorFlow Datasets is slow
  4. ValueError: Shapes (None, 1) and (None, 50) are incompatible
  5. Stanford Dogs Benchmark (Fine-Grained Image Classification) | Papers With Code
  6. GPU only being used 1-5% Tensorflow-gpu and Keras
  7. Google AI Blog: EfficientNet: Improving Accuracy and Efficiency through AutoML and Model Scaling
  8. EfficientNet explained in detail
  9. Retraining an Image Classifier | TensorFlow Hub
  10. TensorFlow Hub
  11. 'KerasLayer' object has no attribute 'shape'
  12. Transfer learning with TensorFlow Hub
