Preface

This article runs YOLOv4 code downloaded from GitHub using the PyTorch framework; the running results are shown at the end of the article.

About YOLO V4

YOLOv4 is an improvement over YOLOv3. Compared with YOLOv3, YOLOv4 makes the following changes:

  • Backbone feature-extraction network: changed from DarkNet53 to CSPDarkNet53
  • Feature pyramid: SPP and PAN
  • Classification and regression head: same as YOLOv3
  • Tricks: Mosaic data augmentation, label smoothing, CIoU, cosine-annealing learning-rate decay
  • Activation function: Mish

Note: the blog I referenced explains YOLOv4 through its differences from YOLOv3, so here is a blog explaining YOLOv3 for reference: https://blog.csdn.net/weixin_44791964/article/details/105310627

YOLO V4 Structure Analysis

Note: although the code uses the PyTorch framework, all shape descriptions in this article put the number of channels in the last dimension.

1. Backbone

When the input is 416 x 416, the structure diagram is as follows:

When the input is 608 x 608 x 3, the feature structure is as shown below:

The main improvements to the backbone feature-extraction network are:

  1. Change of backbone: from YOLOv3's DarkNet53 to CSPDarkNet53
  2. Change of activation function: Mish is used

First, the Resblock_body in the left half of the figure: it consists of one downsampling step followed by a stack of several residual blocks. DarkNet53 itself is built by chaining these resblock_body modules together.

In YOLOv4, the following modifications are made:

  • The activation function of DarknetConv2D is changed to Mish, so the convolution block changes from DarknetConv2D_BN_Leaky to DarknetConv2D_BN_Mish. The Mish formula is Mish(x) = x · tanh(softplus(x)) = x · tanh(ln(1 + e^x)).

  • The second modification changes the structure of resblock_body by adding the CSPNet structure. CSPNet is not complicated: it splits the original stack of residual blocks into two branches. The main branch keeps the original stack of residual blocks; the other branch acts like a "residual edge" that, after only a small amount of processing, connects directly to the end. The code is shown below:
#---------------------------------------------------#
#   CSPdarknet structural block
#   It contains a large residual edge
#   that bypasses many of the residual blocks
#---------------------------------------------------#
class Resblock_body(nn.Module):
    def __init__(self, in_channels, out_channels, num_blocks, first):
        super(Resblock_body, self).__init__()

        self.downsample_conv = BasicConv(in_channels, out_channels, 3, stride=2)

        if first:
            self.split_conv0 = BasicConv(out_channels, out_channels, 1)
            self.split_conv1 = BasicConv(out_channels, out_channels, 1)
            self.blocks_conv = nn.Sequential(
                Resblock(channels=out_channels, hidden_channels=out_channels//2),
                BasicConv(out_channels, out_channels, 1)
            )
            self.concat_conv = BasicConv(out_channels*2, out_channels, 1)
        else:
            self.split_conv0 = BasicConv(out_channels, out_channels//2, 1)
            self.split_conv1 = BasicConv(out_channels, out_channels//2, 1)
            self.blocks_conv = nn.Sequential(
                *[Resblock(out_channels//2) for _ in range(num_blocks)],
                BasicConv(out_channels//2, out_channels//2, 1)
            )
            self.concat_conv = BasicConv(out_channels, out_channels, 1)

    def forward(self, x):
        # Downsample, then split into the "residual edge" branch (x0)
        # and the main branch of stacked residual blocks (x1)
        x = self.downsample_conv(x)

        x0 = self.split_conv0(x)
        x1 = self.split_conv1(x)
        x1 = self.blocks_conv(x1)

        # Concatenate the two branches and fuse them with a 1x1 convolution
        x = torch.cat([x1, x0], dim=1)
        x = self.concat_conv(x)
        return x

Full implementation code:

import torch
import torch.nn.functional as F
import torch.nn as nn
import math
from collections import OrderedDict

#-------------------------------------------------#
#   Mish activation function
#-------------------------------------------------#
class Mish(nn.Module):
    def __init__(self):
        super(Mish, self).__init__()

    def forward(self, x):
        return x * torch.tanh(F.softplus(x))

#-------------------------------------------------#
#   Convolution block
#   CONV + BATCHNORM + MISH
#-------------------------------------------------#
class BasicConv(nn.Module):
    def __init__(self, in_channels, out_channels, kernel_size, stride=1):
        super(BasicConv, self).__init__()

        self.conv = nn.Conv2d(in_channels, out_channels, kernel_size, stride, kernel_size//2, bias=False)
        self.bn = nn.BatchNorm2d(out_channels)
        self.activation = Mish()

    def forward(self, x):
        x = self.conv(x)
        x = self.bn(x)
        x = self.activation(x)
        return x

#---------------------------------------------------#
#   Building block of the CSPdarknet structural block
#   This is the residual block stacked inside it
#---------------------------------------------------#
class Resblock(nn.Module):
    def __init__(self, channels, hidden_channels=None, residual_activation=nn.Identity()):
        super(Resblock, self).__init__()

        if hidden_channels is None:
            hidden_channels = channels

        self.block = nn.Sequential(
            BasicConv(channels, hidden_channels, 1),
            BasicConv(hidden_channels, channels, 3)
        )

    def forward(self, x):
        return x + self.block(x)

#---------------------------------------------------#
#   CSPdarknet structural block
#   It contains a large residual edge
#   that bypasses many of the residual blocks
#---------------------------------------------------#
class Resblock_body(nn.Module):
    def __init__(self, in_channels, out_channels, num_blocks, first):
        super(Resblock_body, self).__init__()

        self.downsample_conv = BasicConv(in_channels, out_channels, 3, stride=2)

        if first:
            self.split_conv0 = BasicConv(out_channels, out_channels, 1)
            self.split_conv1 = BasicConv(out_channels, out_channels, 1)
            self.blocks_conv = nn.Sequential(
                Resblock(channels=out_channels, hidden_channels=out_channels//2),
                BasicConv(out_channels, out_channels, 1)
            )
            self.concat_conv = BasicConv(out_channels*2, out_channels, 1)
        else:
            self.split_conv0 = BasicConv(out_channels, out_channels//2, 1)
            self.split_conv1 = BasicConv(out_channels, out_channels//2, 1)
            self.blocks_conv = nn.Sequential(
                *[Resblock(out_channels//2) for _ in range(num_blocks)],
                BasicConv(out_channels//2, out_channels//2, 1)
            )
            self.concat_conv = BasicConv(out_channels, out_channels, 1)

    def forward(self, x):
        x = self.downsample_conv(x)

        x0 = self.split_conv0(x)
        x1 = self.split_conv1(x)
        x1 = self.blocks_conv(x1)

        x = torch.cat([x1, x0], dim=1)
        x = self.concat_conv(x)
        return x

class CSPDarkNet(nn.Module):
    def __init__(self, layers):
        super(CSPDarkNet, self).__init__()
        self.inplanes = 32
        self.conv1 = BasicConv(3, self.inplanes, kernel_size=3, stride=1)
        self.feature_channels = [64, 128, 256, 512, 1024]

        self.stages = nn.ModuleList([
            Resblock_body(self.inplanes, self.feature_channels[0], layers[0], first=True),
            Resblock_body(self.feature_channels[0], self.feature_channels[1], layers[1], first=False),
            Resblock_body(self.feature_channels[1], self.feature_channels[2], layers[2], first=False),
            Resblock_body(self.feature_channels[2], self.feature_channels[3], layers[3], first=False),
            Resblock_body(self.feature_channels[3], self.feature_channels[4], layers[4], first=False)
        ])

        self.num_features = 1
        # Weight initialisation
        for m in self.modules():
            if isinstance(m, nn.Conv2d):
                n = m.kernel_size[0] * m.kernel_size[1] * m.out_channels
                m.weight.data.normal_(0, math.sqrt(2. / n))
            elif isinstance(m, nn.BatchNorm2d):
                m.weight.data.fill_(1)
                m.bias.data.zero_()

    def forward(self, x):
        x = self.conv1(x)

        x = self.stages[0](x)
        x = self.stages[1](x)
        out3 = self.stages[2](x)
        out4 = self.stages[3](out3)
        out5 = self.stages[4](out4)

        return out3, out4, out5

def darknet53(pretrained, **kwargs):
    model = CSPDarkNet([1, 2, 8, 8, 4])
    if pretrained:
        if isinstance(pretrained, str):
            model.load_state_dict(torch.load(pretrained))
        else:
            raise Exception("darknet request a pretrained path. got [{}]".format(pretrained))
    return model
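As a quick sanity check of the backbone (a minimal sketch; it assumes the code above has been saved as an importable module, here hypothetically named CSPdarknet.py), the three returned feature maps for a 416 x 416 input have strides 8, 16 and 32:

import torch
from CSPdarknet import darknet53    # hypothetical module name for the code above

model = darknet53(None)              # pretrained=None: no weights are loaded
x = torch.randn(1, 3, 416, 416)      # dummy input image
out3, out4, out5 = model(x)
print(out3.shape)   # torch.Size([1, 256, 52, 52])
print(out4.shape)   # torch.Size([1, 512, 26, 26])
print(out5.shape)   # torch.Size([1, 1024, 13, 13])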

2. Feature pyramid (using 416 x 416 as an example)

In the figure above, everything apart from the CSPDarknet53 and YOLO Head structures belongs to the feature pyramid.

In the feature-pyramid part, YOLOv4 combines two improvements: the SPP structure and the PANet structure.

1. The SPP structure is applied to the last feature layer of CSPDarknet53: after three DarknetConv2D_BN_Leaky convolutions on that last feature layer, the result is processed by max pooling at four different scales. The pooling kernel sizes, as shown in the figure, are 13 x 13, 9 x 9, 5 x 5 and 1 x 1 (the 1 x 1 branch means no pooling is applied).

#---------------------------------------------------#
#   SPP structure: max pooling with kernels of
#   different sizes, then the results are stacked
#---------------------------------------------------#
class SpatialPyramidPooling(nn.Module):
    def __init__(self, pool_sizes=[5, 9, 13]):
        super(SpatialPyramidPooling, self).__init__()

        self.maxpools = nn.ModuleList([nn.MaxPool2d(pool_size, 1, pool_size//2) for pool_size in pool_sizes])

    def forward(self, x):
        features = [maxpool(x) for maxpool in self.maxpools[::-1]]
        features = torch.cat(features + [x], dim=1)
        return features
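A quick check of the SPP output shape (a small sketch, not part of the repository's code, assuming the SpatialPyramidPooling class above is in scope): each max pooling uses stride 1 and padding of half the kernel size, so the spatial size is preserved, and the three pooled results are concatenated with the input, multiplying the channel count by 4. This is why the make_three_conv call after SPP in YoloBody takes 2048 input channels.

import torch

spp = SpatialPyramidPooling()
x = torch.randn(1, 512, 13, 13)   # P5 after the first three convolutions
print(spp(x).shape)               # torch.Size([1, 2048, 13, 13]) = 512 * 4 channels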

2. PANet is an instance-segmentation algorithm from 2018.
The figure above shows the original PANet structure. From its up-then-down-then-up path you can see its most important characteristic: features are extracted repeatedly. Part (a) is the traditional feature-pyramid structure; after that feature fusion is completed, part (b) adds another path so the features are aggregated once more in the opposite direction. In YOLOv4, the PANet structure is used on the last three effective feature layers.

Implementation code:

#---------------------------------------------------#
#   yolo_body
#   Note: conv2d, make_three_conv, Upsample,
#   make_five_conv and yolo_head are helper builders
#   defined elsewhere in the same file of the repository
#---------------------------------------------------#
class YoloBody(nn.Module):
    def __init__(self, config):
        super(YoloBody, self).__init__()
        self.config = config
        #  backbone
        self.backbone = darknet53(None)

        self.conv1 = make_three_conv([512,1024],1024)
        self.SPP = SpatialPyramidPooling()
        self.conv2 = make_three_conv([512,1024],2048)

        self.upsample1 = Upsample(512,256)
        self.conv_for_P4 = conv2d(512,256,1)
        self.make_five_conv1 = make_five_conv([256, 512],512)

        self.upsample2 = Upsample(256,128)
        self.conv_for_P3 = conv2d(256,128,1)
        self.make_five_conv2 = make_five_conv([128, 256],256)
        # 3*(5+num_classes) = 3*(5+20) = 3*(4+1+20) = 75
        final_out_filter2 = len(config["yolo"]["anchors"][2]) * (5 + config["yolo"]["classes"])
        self.yolo_head3 = yolo_head([256, final_out_filter2],128)

        self.down_sample1 = conv2d(128,256,3,stride=2)
        self.make_five_conv3 = make_five_conv([256, 512],512)
        # 3*(5+num_classes) = 3*(5+20) = 3*(4+1+20) = 75
        final_out_filter1 = len(config["yolo"]["anchors"][1]) * (5 + config["yolo"]["classes"])
        self.yolo_head2 = yolo_head([512, final_out_filter1],256)

        self.down_sample2 = conv2d(256,512,3,stride=2)
        self.make_five_conv4 = make_five_conv([512, 1024],1024)
        # 3*(5+num_classes) = 3*(5+20) = 3*(4+1+20) = 75
        final_out_filter0 = len(config["yolo"]["anchors"][0]) * (5 + config["yolo"]["classes"])
        self.yolo_head1 = yolo_head([1024, final_out_filter0],512)

    def forward(self, x):
        #  backbone
        x2, x1, x0 = self.backbone(x)

        # SPP on the deepest feature layer
        P5 = self.conv1(x0)
        P5 = self.SPP(P5)
        P5 = self.conv2(P5)

        # PANet: top-down path
        P5_upsample = self.upsample1(P5)
        P4 = self.conv_for_P4(x1)
        P4 = torch.cat([P4,P5_upsample],axis=1)
        P4 = self.make_five_conv1(P4)

        P4_upsample = self.upsample2(P4)
        P3 = self.conv_for_P3(x2)
        P3 = torch.cat([P3,P4_upsample],axis=1)
        P3 = self.make_five_conv2(P3)

        # PANet: bottom-up path
        P3_downsample = self.down_sample1(P3)
        P4 = torch.cat([P3_downsample,P4],axis=1)
        P4 = self.make_five_conv3(P4)

        P4_downsample = self.down_sample2(P4)
        P5 = torch.cat([P4_downsample,P5],axis=1)
        P5 = self.make_five_conv4(P5)

        out2 = self.yolo_head3(P3)
        out1 = self.yolo_head2(P4)
        out0 = self.yolo_head1(P5)

        return out0, out1, out2

3. YOLO Head: using the obtained features to make predictions

1.  This part works the same way as in YOLOv3. For feature utilisation, YOLOv4 extracts multiple feature layers for object detection; the three extracted feature layers are located in the middle, lower-middle and bottom parts of the network. When the input shape is 608 x 608, the shapes of these three feature layers are (76, 76, 256), (38, 38, 512) and (19, 19, 1024).

2.  The output shapes are (19, 19, 75), (38, 38, 75) and (76, 76, 75). About the 75: 75 = 3 x (20 + 1 + 4). The 3 is because YOLOv4 uses 3 anchors on every feature layer; the 20 is because this structure is based on the VOC dataset, which has 20 classes; the 4 is because adjusting an anchor requires 4 parameters; and the 1 is the objectness confidence. If you switch to the COCO dataset (80 classes), the 75 becomes 3 x (80 + 4 + 1) = 255.
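As a quick numeric check of this channel count (a small sketch, not code from the repository):

num_anchors = 3
for num_classes in (20, 80):                    # VOC and COCO
    print(num_anchors * (num_classes + 4 + 1))  # prints 75, then 255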

Implementation code:

#---------------------------------------------------#
#   Obtain the final yolov4 outputs
#---------------------------------------------------#
def yolo_head(filters_list, in_filters):
    m = nn.Sequential(
        conv2d(in_filters, filters_list[0], 3),
        nn.Conv2d(filters_list[0], filters_list[1], 1),
    )
    return m

The YoloBody class that builds and calls these heads is the same one listed in the feature-pyramid section above, so it is not repeated here.

4. Decoding the predictions

Step 2 gives the prediction results of the three feature layers, with shapes (N, 19, 19, 75), (N, 38, 38, 75) and (N, 76, 76, 75), corresponding to the positions of the 3 predicted boxes at every cell of the 19 x 19, 38 x 38 and 76 x 76 grids the image is divided into. However, these raw predictions are not yet the positions of the final boxes on the image; they still have to be decoded.

Briefly recalling the YOLOv3 prediction principle: the 3 feature layers divide the whole image into 19 x 19, 38 x 38 and 76 x 76 grids respectively, and each grid point is responsible for detection in a certain region.

As mentioned above, the 75 in the last dimension equals (20 + 1 + 4) x 3, where (20 + 1 + 4) represents the classification results, the confidence, and (x_offset, y_offset, h and w).

In the YOLOv3 decoding process, each grid point is added to its corresponding x_offset and y_offset, which gives the centre of the predicted box; the anchor is then combined with h and w to compute the width and height of the predicted box.

Finally, the predictions must be score-sorted and filtered with non-maximum suppression (NMS) to obtain the final results. This principle is shared by almost all object detectors, but this project applies it per class:

  • Take, for each class, the boxes and scores whose score is greater than self.obj_threshold.
  • Apply non-maximum suppression to the extracted boxes and scores (a minimal sketch of this step is given right after this list).
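The repository's own NMS routine is not reproduced here; the following is only a rough sketch of the per-class filtering plus NMS idea, using torchvision.ops.nms and hypothetical names (decoded_boxes, obj_conf, class_probs, obj_threshold, nms_threshold) rather than the project's actual variables:

import torch
from torchvision.ops import nms

def per_class_nms(decoded_boxes, obj_conf, class_probs, obj_threshold=0.5, nms_threshold=0.4):
    # decoded_boxes: (num_boxes, 4) in x1, y1, x2, y2 format
    # obj_conf:      (num_boxes,)   objectness confidence
    # class_probs:   (num_boxes, num_classes) class probabilities
    results = []
    scores_all = obj_conf.unsqueeze(-1) * class_probs         # combined score per class
    for c in range(class_probs.shape[1]):
        scores = scores_all[:, c]
        keep_mask = scores > obj_threshold                    # step 1: score filtering
        if keep_mask.sum() == 0:
            continue
        boxes_c, scores_c = decoded_boxes[keep_mask], scores[keep_mask]
        keep = nms(boxes_c, scores_c, nms_threshold)          # step 2: NMS within the class
        for idx in keep:
            results.append((boxes_c[idx], scores_c[idx], c))
    return results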

The decoding implementation is shown below; when yolo_eval is called, every feature layer is decoded:

class DecodeBox(nn.Module):
    def __init__(self, anchors, num_classes, img_size):
        super(DecodeBox, self).__init__()
        self.anchors = anchors
        self.num_anchors = len(anchors)
        self.num_classes = num_classes
        self.bbox_attrs = 5 + num_classes
        self.img_size = img_size

    def forward(self, input):
        # input has shape bs, 3*(1+4+num_classes), 13, 13
        # Number of images in the batch
        batch_size = input.size(0)
        # 13, 13
        input_height = input.size(2)
        input_width = input.size(3)

        # Compute the stride:
        # how many pixels of the original image each feature point corresponds to.
        # For a 13x13 feature layer, one feature point corresponds to 32 pixels
        # of the original image: 416/13 = 32
        stride_h = self.img_size[1] / input_height
        stride_w = self.img_size[0] / input_width

        # Rescale the anchors to the size of the feature layer,
        # i.e. compute the anchor widths/heights on the feature layer
        scaled_anchors = [(anchor_width / stride_w, anchor_height / stride_h) for anchor_width, anchor_height in self.anchors]

        # bs, 3*(5+num_classes), 13, 13 -> bs, 3, 13, 13, (5+num_classes)
        prediction = input.view(batch_size, self.num_anchors,
                                self.bbox_attrs, input_height, input_width).permute(0, 1, 3, 4, 2).contiguous()

        # Adjustment parameters for the box centre
        x = torch.sigmoid(prediction[..., 0])
        y = torch.sigmoid(prediction[..., 1])
        # Adjustment parameters for the box width and height
        w = prediction[..., 2]  # Width
        h = prediction[..., 3]  # Height
        # Objectness confidence: is there an object
        conf = torch.sigmoid(prediction[..., 4])
        # Class confidence
        pred_cls = torch.sigmoid(prediction[..., 5:])  # Cls pred.

        FloatTensor = torch.cuda.FloatTensor if x.is_cuda else torch.FloatTensor
        LongTensor = torch.cuda.LongTensor if x.is_cuda else torch.LongTensor

        # Generate the grid; the anchor centres are the top-left corners of the cells,
        # batch_size, 3, 13, 13
        grid_x = torch.linspace(0, input_width - 1, input_width).repeat(input_width, 1).repeat(
            batch_size * self.num_anchors, 1, 1).view(x.shape).type(FloatTensor)
        grid_y = torch.linspace(0, input_height - 1, input_height).repeat(input_height, 1).t().repeat(
            batch_size * self.num_anchors, 1, 1).view(y.shape).type(FloatTensor)

        # Generate the anchor widths and heights
        anchor_w = FloatTensor(scaled_anchors).index_select(1, LongTensor([0]))
        anchor_h = FloatTensor(scaled_anchors).index_select(1, LongTensor([1]))
        anchor_w = anchor_w.repeat(batch_size, 1).repeat(1, 1, input_height * input_width).view(w.shape)
        anchor_h = anchor_h.repeat(batch_size, 1).repeat(1, 1, input_height * input_width).view(h.shape)

        # Compute the adjusted box centres and widths/heights
        pred_boxes = FloatTensor(prediction[..., :4].shape)
        pred_boxes[..., 0] = x.data + grid_x
        pred_boxes[..., 1] = y.data + grid_y
        pred_boxes[..., 2] = torch.exp(w.data) * anchor_w
        pred_boxes[..., 3] = torch.exp(h.data) * anchor_h

        # Rescale the outputs to be relative to the 416x416 input
        _scale = torch.Tensor([stride_w, stride_h] * 2).type(FloatTensor)
        output = torch.cat((pred_boxes.view(batch_size, -1, 4) * _scale,
                            conf.view(batch_size, -1, 1), pred_cls.view(batch_size, -1, self.num_classes)), -1)
        return output.data

5. Drawing on the original image

Step 4 gives the real positions of the predicted boxes on the original image, already filtered, so they can be drawn directly on the picture.

Training YOLO V4

1. Improved training tricks

a) Mosaic data augmentation

Mosaic uses four images. According to the paper, its big advantage is that it enriches the background of the detected objects, and the BN layers compute their statistics over data from four images at once. The implementation idea is as follows:
1. Read four images each time.

2. Flip, scale and apply colour (HSV) shifts to each of the four images, then place them in the four corner positions.

3. Combine the images and combine their boxes.

For detailed steps, see this blog: https://blog.csdn.net/weixin_44791964/article/details/106214657

The code is as follows:

# Imports required by this snippet
import numpy as np
from PIL import Image, ImageDraw
from matplotlib.colors import rgb_to_hsv, hsv_to_rgb

def rand(a=0, b=1):
    return np.random.rand()*(b-a) + a

def merge_bboxes(bboxes, cutx, cuty):
    merge_bbox = []
    for i in range(len(bboxes)):
        for box in bboxes[i]:
            tmp_box = []
            x1,y1,x2,y2 = box[0], box[1], box[2], box[3]

            if i == 0:
                if y1 > cuty or x1 > cutx:
                    continue
                if y2 >= cuty and y1 <= cuty:
                    y2 = cuty
                    if y2-y1 < 5:
                        continue
                if x2 >= cutx and x1 <= cutx:
                    x2 = cutx
                    if x2-x1 < 5:
                        continue

            if i == 1:
                if y2 < cuty or x1 > cutx:
                    continue
                if y2 >= cuty and y1 <= cuty:
                    y1 = cuty
                    if y2-y1 < 5:
                        continue
                if x2 >= cutx and x1 <= cutx:
                    x2 = cutx
                    if x2-x1 < 5:
                        continue

            if i == 2:
                if y2 < cuty or x2 < cutx:
                    continue
                if y2 >= cuty and y1 <= cuty:
                    y1 = cuty
                    if y2-y1 < 5:
                        continue
                if x2 >= cutx and x1 <= cutx:
                    x1 = cutx
                    if x2-x1 < 5:
                        continue

            if i == 3:
                if y1 > cuty or x2 < cutx:
                    continue
                if y2 >= cuty and y1 <= cuty:
                    y2 = cuty
                    if y2-y1 < 5:
                        continue
                if x2 >= cutx and x1 <= cutx:
                    x1 = cutx
                    if x2-x1 < 5:
                        continue

            tmp_box.append(x1)
            tmp_box.append(y1)
            tmp_box.append(x2)
            tmp_box.append(y2)
            tmp_box.append(box[-1])
            merge_bbox.append(tmp_box)
    return merge_bbox

def get_random_data(annotation_line, input_shape, random=True, hue=.1, sat=1.5, val=1.5, proc_img=True):
    '''random preprocessing for real-time data augmentation'''
    h, w = input_shape
    min_offset_x = 0.4
    min_offset_y = 0.4
    scale_low = 1-min(min_offset_x,min_offset_y)
    scale_high = scale_low+0.2

    image_datas = []
    box_datas = []
    index = 0

    place_x = [0,0,int(w*min_offset_x),int(w*min_offset_x)]
    place_y = [0,int(h*min_offset_y),int(w*min_offset_y),0]
    for line in annotation_line:
        # Split each annotation line
        line_content = line.split()
        # Open the image
        image = Image.open(line_content[0])
        image = image.convert("RGB")
        # Image size
        iw, ih = image.size
        # Box positions
        box = np.array([np.array(list(map(int,box.split(',')))) for box in line_content[1:]])

        # image.save(str(index)+".jpg")
        # Whether to flip the image
        flip = rand()<.5
        if flip and len(box)>0:
            image = image.transpose(Image.FLIP_LEFT_RIGHT)
            box[:, [0,2]] = iw - box[:, [2,0]]

        # Rescale the input image
        new_ar = w/h
        scale = rand(scale_low, scale_high)
        if new_ar < 1:
            nh = int(scale*h)
            nw = int(nh*new_ar)
        else:
            nw = int(scale*w)
            nh = int(nw/new_ar)
        image = image.resize((nw,nh), Image.BICUBIC)

        # Colour (HSV) jitter
        hue = rand(-hue, hue)
        sat = rand(1, sat) if rand()<.5 else 1/rand(1, sat)
        val = rand(1, val) if rand()<.5 else 1/rand(1, val)
        x = rgb_to_hsv(np.array(image)/255.)
        x[..., 0] += hue
        x[..., 0][x[..., 0]>1] -= 1
        x[..., 0][x[..., 0]<0] += 1
        x[..., 1] *= sat
        x[..., 2] *= val
        x[x>1] = 1
        x[x<0] = 0
        image = hsv_to_rgb(x)

        image = Image.fromarray((image*255).astype(np.uint8))
        # Place the image in the position of one of the four sub-images
        dx = place_x[index]
        dy = place_y[index]
        new_image = Image.new('RGB', (w,h), (128,128,128))
        new_image.paste(image, (dx, dy))
        image_data = np.array(new_image)/255

        # Image.fromarray((image_data*255).astype(np.uint8)).save(str(index)+"distort.jpg")

        index = index + 1
        box_data = []
        # Re-process the boxes
        if len(box)>0:
            np.random.shuffle(box)
            box[:, [0,2]] = box[:, [0,2]]*nw/iw + dx
            box[:, [1,3]] = box[:, [1,3]]*nh/ih + dy
            box[:, 0:2][box[:, 0:2]<0] = 0
            box[:, 2][box[:, 2]>w] = w
            box[:, 3][box[:, 3]>h] = h
            box_w = box[:, 2] - box[:, 0]
            box_h = box[:, 3] - box[:, 1]
            box = box[np.logical_and(box_w>1, box_h>1)]
            box_data = np.zeros((len(box),5))
            box_data[:len(box)] = box

        image_datas.append(image_data)
        box_datas.append(box_data)

        # Debug visualisation of the boxes on each sub-image
        img = Image.fromarray((image_data*255).astype(np.uint8))
        for j in range(len(box_data)):
            thickness = 3
            left, top, right, bottom  = box_data[j][0:4]
            draw = ImageDraw.Draw(img)
            for i in range(thickness):
                draw.rectangle([left + i, top + i, right - i, bottom - i],outline=(255,255,255))
        img.show()

    # Cut the four images and combine them into one
    cutx = np.random.randint(int(w*min_offset_x), int(w*(1 - min_offset_x)))
    cuty = np.random.randint(int(h*min_offset_y), int(h*(1 - min_offset_y)))

    new_image = np.zeros([h,w,3])
    new_image[:cuty, :cutx, :] = image_datas[0][:cuty, :cutx, :]
    new_image[cuty:, :cutx, :] = image_datas[1][cuty:, :cutx, :]
    new_image[cuty:, cutx:, :] = image_datas[2][cuty:, cutx:, :]
    new_image[:cuty, cutx:, :] = image_datas[3][:cuty, cutx:, :]

    # Further process the boxes
    new_boxes = merge_bboxes(box_datas, cutx, cuty)

    return new_image, new_boxes

b) Label smoothing (adapted from another source)

The label-smoothing formula is:

new_onehot_labels = onehot_labels * (1 - label_smoothing) + label_smoothing / num_classes

When label_smoothing = 0.01, the formula becomes:

new_onehot_labels = onehot_labels * (1 - 0.01) + 0.01 / num_classes

For example, when the original one-hot labels are 0 and 1, after smoothing they become 0.005 and 0.995 (binary classification, num_classes = 2). In other words, over-confident classification is slightly penalised so the model cannot fit the labels too exactly; fitting them too exactly easily leads to overfitting.
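A quick numeric check of the formula (a small sketch, not the project's training code):

import torch

onehot_labels = torch.tensor([0., 1.])
label_smoothing, num_classes = 0.01, 2
print(onehot_labels * (1 - label_smoothing) + label_smoothing / num_classes)
# tensor([0.0050, 0.9950])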

c) CIoU

CIoU takes the distance between the target and the anchor, the overlap ratio, the scale and a penalty term into account at the same time.
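Concretely, the CIoU used for the regression loss (this matches the box_ciou function in the loss code further below) is:

CIoU = IoU - ρ²(b, b_gt) / c² - α·v
v = (4 / π²) · (arctan(w_gt / h_gt) - arctan(w / h))²
α = v / (1 - IoU + v)

where ρ²(b, b_gt) is the squared distance between the centres of the predicted and ground-truth boxes, and c is the diagonal length of the smallest box enclosing both. The regression loss is then 1 - CIoU.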

d) Cosine-annealing learning-rate decay

Cosine annealing: the learning rate first rises and then falls; the rise is linear, the fall follows a cosine curve, and the cycle is repeated several times.

PyTorch has a ready-made implementation that can be called directly:

lr_scheduler = optim.lr_scheduler.CosineAnnealingLR(optimizer, T_max=5, eta_min=1e-5)
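As a minimal usage sketch (the Linear module here is only a stand-in for the real YoloBody, and the epoch loop is a placeholder for the project's actual training loop), the scheduler is stepped once per epoch:

import torch
import torch.optim as optim

model = torch.nn.Linear(10, 2)   # stand-in model, just to make the sketch runnable
optimizer = optim.Adam(model.parameters(), lr=1e-3)
lr_scheduler = optim.lr_scheduler.CosineAnnealingLR(optimizer, T_max=5, eta_min=1e-5)

for epoch in range(15):
    # ... run one training epoch here ...
    lr_scheduler.step()                        # update the learning rate once per epoch
    print(epoch, lr_scheduler.get_last_lr())   # inspect the current learning rate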

2. Composition of the loss

a) Parameters needed to compute the loss

Computing the loss is really a comparison between y_pre and y_true. y_pre is the output of the network for one image and contains the contents of the three feature layers; it needs to be decoded before boxes can be drawn on the image. y_true is, for a real image, the offsets, widths/heights and classes of each ground-truth box on the 19 x 19, 38 x 38 and 76 x 76 grids; it needs to be encoded so that its structure matches y_pre.

Going one step further, the shapes of y_pre and y_true are

(batch_size, 19, 19, 3, 25), (batch_size, 38, 38, 3, 25), (batch_size, 76, 76, 3, 25).

b) y_pre

The final output of the network is, for every grid point of the three effective feature layers, the predicted boxes and their classes. The three feature layers correspond to the image being divided into grids of different sizes; at every grid point there are three anchors, each with a position, a confidence and class scores.

For the outputs y1, y2 and y3, [..., :2] is the offset relative to each grid point, [..., 2:4] is the width and height, [..., 4:5] is the confidence of the box, and [..., 5:] are the predicted probabilities of each class.

c) y_true

y_true is, for a real image, the offsets, widths/heights and classes of each of its ground-truth boxes on the (19, 19), (38, 38) and (76, 76) grids. It still needs to be encoded to match the structure of y_pre.

d) How the loss is computed

The loss is computed over each of the three feature layers; it is not a simple subtraction of y_true from y_pre.

Here we take the smallest feature layer, 19 x 19, as an example.

  1. Use y_true to extract, for this feature layer, the positions where an object really exists, shape (m, 19, 19, 3, 1), and their corresponding classes, shape (m, 19, 19, 3, 20).
  2. Process the raw prediction output to obtain the reshaped prediction y_pre with shape (m, 19, 19, 3, 25), together with the decoded x, y, w and h.
  3. For each image, compute the IoU between all ground-truth boxes and all predicted boxes; predicted boxes whose overlap with a ground-truth box is greater than 0.5 are ignored in the no-object confidence loss.
  4. Compute CIoU as the regression loss; only positive samples contribute to this regression loss.
  5. Compute the confidence loss, which has two parts: for positions where an object really exists, compare the predicted confidence with 1; for positions where no object exists (and whose maximum IoU from step 3 is below the ignore threshold), compare the predicted confidence with 0.
  6. Compute the class-prediction loss: for positions where an object really exists, compare the predicted classes with the true classes.

The code is as follows:


#---------------------------------------------------#
#   Label smoothing
#---------------------------------------------------#
def smooth_labels(y_true, label_smoothing, num_classes):
    return y_true * (1.0 - label_smoothing) + label_smoothing / num_classes

def box_ciou(b1, b2):
    """
    Input:
    ----------
    b1: tensor, shape=(batch, feat_w, feat_h, anchor_num, 4), xywh
    b2: tensor, shape=(batch, feat_w, feat_h, anchor_num, 4), xywh

    Returns:
    -------
    ciou: tensor, shape=(batch, feat_w, feat_h, anchor_num, 1)
    """
    # Top-left and bottom-right corners of the predicted boxes
    b1_xy = b1[..., :2]
    b1_wh = b1[..., 2:4]
    b1_wh_half = b1_wh/2.
    b1_mins = b1_xy - b1_wh_half
    b1_maxes = b1_xy + b1_wh_half
    # Top-left and bottom-right corners of the ground-truth boxes
    b2_xy = b2[..., :2]
    b2_wh = b2[..., 2:4]
    b2_wh_half = b2_wh/2.
    b2_mins = b2_xy - b2_wh_half
    b2_maxes = b2_xy + b2_wh_half

    # IoU between the ground-truth and predicted boxes
    intersect_mins = torch.max(b1_mins, b2_mins)
    intersect_maxes = torch.min(b1_maxes, b2_maxes)
    intersect_wh = torch.max(intersect_maxes - intersect_mins, torch.zeros_like(intersect_maxes))
    intersect_area = intersect_wh[..., 0] * intersect_wh[..., 1]
    b1_area = b1_wh[..., 0] * b1_wh[..., 1]
    b2_area = b2_wh[..., 0] * b2_wh[..., 1]
    union_area = b1_area + b2_area - intersect_area
    iou = intersect_area / (union_area + 1e-6)

    # Squared distance between the box centres
    center_distance = torch.sum(torch.pow((b1_xy - b2_xy), 2), axis=-1)

    # Top-left and bottom-right corners of the smallest box enclosing both boxes
    enclose_mins = torch.min(b1_mins, b2_mins)
    enclose_maxes = torch.max(b1_maxes, b2_maxes)
    enclose_wh = torch.max(enclose_maxes - enclose_mins, torch.zeros_like(intersect_maxes))
    # Squared diagonal length of the enclosing box
    enclose_diagonal = torch.sum(torch.pow(enclose_wh, 2), axis=-1)
    ciou = iou - 1.0 * (center_distance) / (enclose_diagonal + 1e-7)

    v = (4 / (math.pi ** 2)) * torch.pow((torch.atan(b1_wh[..., 0]/b1_wh[..., 1]) - torch.atan(b2_wh[..., 0]/b2_wh[..., 1])), 2)
    alpha = v / (1.0 - iou + v)
    ciou = ciou - alpha * v
    return ciou

def clip_by_tensor(t, t_min, t_max):
    t = t.float()
    result = (t >= t_min).float() * t + (t < t_min).float() * t_min
    result = (result <= t_max).float() * result + (result > t_max).float() * t_max
    return result

def MSELoss(pred, target):
    return (pred - target)**2

def BCELoss(pred, target):
    epsilon = 1e-7
    pred = clip_by_tensor(pred, epsilon, 1.0 - epsilon)
    output = -target * torch.log(pred) - (1.0 - target) * torch.log(1.0 - pred)
    return output

class YOLOLoss(nn.Module):
    def __init__(self, anchors, num_classes, img_size, label_smooth=0, cuda=True):
        super(YOLOLoss, self).__init__()
        self.anchors = anchors
        self.num_anchors = len(anchors)
        self.num_classes = num_classes
        self.bbox_attrs = 5 + num_classes
        self.img_size = img_size
        self.label_smooth = label_smooth

        self.ignore_threshold = 0.5
        self.lambda_conf = 1.0
        self.lambda_cls = 1.0
        self.lambda_loc = 1.0
        self.cuda = cuda

    def forward(self, input, targets=None):
        # input has shape bs, 3*(5+num_classes), 13, 13
        # Number of images in the batch
        bs = input.size(0)
        # Height of the feature layer
        in_h = input.size(2)
        # Width of the feature layer
        in_w = input.size(3)

        # Compute the stride:
        # how many pixels of the original image each feature point corresponds to.
        # For a 13x13 feature layer, one feature point corresponds to 32 pixels of the original image
        stride_h = self.img_size[1] / in_h
        stride_w = self.img_size[0] / in_w

        # Rescale the anchors to the size of the feature layer,
        # i.e. compute the anchor widths/heights on the feature layer
        scaled_anchors = [(a_w / stride_w, a_h / stride_h) for a_w, a_h in self.anchors]
        # bs, 3*(5+num_classes), 13, 13 -> bs, 3, 13, 13, (5+num_classes)
        prediction = input.view(bs, int(self.num_anchors/3),
                                self.bbox_attrs, in_h, in_w).permute(0, 1, 3, 4, 2).contiguous()

        # Adjust the predictions
        conf = torch.sigmoid(prediction[..., 4])       # Conf
        pred_cls = torch.sigmoid(prediction[..., 5:])  # Cls pred.

        # Find which anchors contain an object
        # (bbox_iou is a helper defined elsewhere in the repository)
        mask, noobj_mask, t_box, tconf, tcls, box_loss_scale_x, box_loss_scale_y = self.get_target(targets, scaled_anchors,
                                                                                                   in_w, in_h,
                                                                                                   self.ignore_threshold)

        noobj_mask, pred_boxes_for_ciou = self.get_ignore(prediction, targets, scaled_anchors, in_w, in_h, noobj_mask)

        if self.cuda:
            mask, noobj_mask = mask.cuda(), noobj_mask.cuda()
            box_loss_scale_x, box_loss_scale_y = box_loss_scale_x.cuda(), box_loss_scale_y.cuda()
            tconf, tcls = tconf.cuda(), tcls.cuda()
            pred_boxes_for_ciou = pred_boxes_for_ciou.cuda()
            t_box = t_box.cuda()

        box_loss_scale = 2 - box_loss_scale_x * box_loss_scale_y
        #  losses.
        ciou = (1 - box_ciou(pred_boxes_for_ciou[mask.bool()], t_box[mask.bool()])) * box_loss_scale[mask.bool()]
        loss_loc = torch.sum(ciou / bs)

        loss_conf = torch.sum(BCELoss(conf, mask) * mask / bs) + \
                    torch.sum(BCELoss(conf, mask) * noobj_mask / bs)

        # print(smooth_labels(tcls[mask == 1],self.label_smooth,self.num_classes))
        loss_cls = torch.sum(BCELoss(pred_cls[mask == 1], smooth_labels(tcls[mask == 1],self.label_smooth,self.num_classes))/bs)
        # print(loss_loc,loss_conf,loss_cls)

        loss = loss_conf * self.lambda_conf + loss_cls * self.lambda_cls + loss_loc * self.lambda_loc
        return loss, loss_conf.item(), loss_cls.item(), loss_loc.item()

    def get_target(self, target, anchors, in_w, in_h, ignore_threshold):
        # Number of images in the batch
        bs = len(target)
        # Select the anchors used by this feature layer
        anchor_index = [[0,1,2],[3,4,5],[6,7,8]][[13,26,52].index(in_w)]
        subtract_index = [0,3,6][[13,26,52].index(in_w)]
        # Create arrays of all zeros or all ones
        mask = torch.zeros(bs, int(self.num_anchors/3), in_h, in_w, requires_grad=False)
        noobj_mask = torch.ones(bs, int(self.num_anchors/3), in_h, in_w, requires_grad=False)

        tx = torch.zeros(bs, int(self.num_anchors/3), in_h, in_w, requires_grad=False)
        ty = torch.zeros(bs, int(self.num_anchors/3), in_h, in_w, requires_grad=False)
        tw = torch.zeros(bs, int(self.num_anchors/3), in_h, in_w, requires_grad=False)
        th = torch.zeros(bs, int(self.num_anchors/3), in_h, in_w, requires_grad=False)
        t_box = torch.zeros(bs, int(self.num_anchors/3), in_h, in_w, 4, requires_grad=False)
        tconf = torch.zeros(bs, int(self.num_anchors/3), in_h, in_w, requires_grad=False)
        tcls = torch.zeros(bs, int(self.num_anchors/3), in_h, in_w, self.num_classes, requires_grad=False)

        box_loss_scale_x = torch.zeros(bs, int(self.num_anchors/3), in_h, in_w, requires_grad=False)
        box_loss_scale_y = torch.zeros(bs, int(self.num_anchors/3), in_h, in_w, requires_grad=False)
        for b in range(bs):
            for t in range(target[b].shape[0]):
                # Position of the box on the feature layer
                gx = target[b][t, 0] * in_w
                gy = target[b][t, 1] * in_h
                gw = target[b][t, 2] * in_w
                gh = target[b][t, 3] * in_h

                # Which grid cell it belongs to
                gi = int(gx)
                gj = int(gy)

                # Ground-truth box (width/height only)
                gt_box = torch.FloatTensor(np.array([0, 0, gw, gh])).unsqueeze(0)

                # All anchor boxes (width/height only)
                anchor_shapes = torch.FloatTensor(np.concatenate((np.zeros((self.num_anchors, 2)),
                                                                  np.array(anchors)), 1))
                # Overlap between the ground-truth box and the anchors
                anch_ious = bbox_iou(gt_box, anchor_shapes)

                # Find the best matching anchor box
                best_n = np.argmax(anch_ious)
                if best_n not in anchor_index:
                    continue
                # Masks
                if (gj < in_h) and (gi < in_w):
                    best_n = best_n - subtract_index
                    # Mark which anchors really contain an object
                    noobj_mask[b, best_n, gj, gi] = 0
                    mask[b, best_n, gj, gi] = 1
                    # Centre targets
                    tx[b, best_n, gj, gi] = gx
                    ty[b, best_n, gj, gi] = gy
                    # Width/height targets
                    tw[b, best_n, gj, gi] = gw
                    th[b, best_n, gj, gi] = gh
                    # Used to compute the xywh loss scale
                    box_loss_scale_x[b, best_n, gj, gi] = target[b][t, 2]
                    box_loss_scale_y[b, best_n, gj, gi] = target[b][t, 3]
                    # Objectness confidence
                    tconf[b, best_n, gj, gi] = 1
                    # Class
                    tcls[b, best_n, gj, gi, int(target[b][t, 4])] = 1
                else:
                    print('Step {0} out of bound'.format(b))
                    print('gj: {0}, height: {1} | gi: {2}, width: {3}'.format(gj, in_h, gi, in_w))
                    continue
        t_box[...,0] = tx
        t_box[...,1] = ty
        t_box[...,2] = tw
        t_box[...,3] = th
        return mask, noobj_mask, t_box, tconf, tcls, box_loss_scale_x, box_loss_scale_y

    def get_ignore(self, prediction, target, scaled_anchors, in_w, in_h, noobj_mask):
        bs = len(target)
        anchor_index = [[0,1,2],[3,4,5],[6,7,8]][[13,26,52].index(in_w)]
        scaled_anchors = np.array(scaled_anchors)[anchor_index]
        # Adjustment parameters for the box centre
        x = torch.sigmoid(prediction[..., 0])
        y = torch.sigmoid(prediction[..., 1])
        # Adjustment parameters for the box width and height
        w = prediction[..., 2]  # Width
        h = prediction[..., 3]  # Height

        FloatTensor = torch.cuda.FloatTensor if x.is_cuda else torch.FloatTensor
        LongTensor = torch.cuda.LongTensor if x.is_cuda else torch.LongTensor

        # Generate the grid; the anchor centres are the top-left corners of the cells
        grid_x = torch.linspace(0, in_w - 1, in_w).repeat(in_w, 1).repeat(
            int(bs*self.num_anchors/3), 1, 1).view(x.shape).type(FloatTensor)
        grid_y = torch.linspace(0, in_h - 1, in_h).repeat(in_h, 1).t().repeat(
            int(bs*self.num_anchors/3), 1, 1).view(y.shape).type(FloatTensor)

        # Generate the anchor widths and heights
        anchor_w = FloatTensor(scaled_anchors).index_select(1, LongTensor([0]))
        anchor_h = FloatTensor(scaled_anchors).index_select(1, LongTensor([1]))

        anchor_w = anchor_w.repeat(bs, 1).repeat(1, 1, in_h * in_w).view(w.shape)
        anchor_h = anchor_h.repeat(bs, 1).repeat(1, 1, in_h * in_w).view(h.shape)

        # Compute the adjusted box centres and widths/heights
        pred_boxes = FloatTensor(prediction[..., :4].shape)
        pred_boxes[..., 0] = x + grid_x
        pred_boxes[..., 1] = y + grid_y
        pred_boxes[..., 2] = torch.exp(w) * anchor_w
        pred_boxes[..., 3] = torch.exp(h) * anchor_h
        for i in range(bs):
            pred_boxes_for_ignore = pred_boxes[i]
            pred_boxes_for_ignore = pred_boxes_for_ignore.view(-1, 4)

            for t in range(target[i].shape[0]):
                gx = target[i][t, 0] * in_w
                gy = target[i][t, 1] * in_h
                gw = target[i][t, 2] * in_w
                gh = target[i][t, 3] * in_h
                gt_box = torch.FloatTensor(np.array([gx, gy, gw, gh])).unsqueeze(0).type(FloatTensor)

                anch_ious = bbox_iou(gt_box, pred_boxes_for_ignore, x1y1x2y2=False)
                anch_ious = anch_ious.view(pred_boxes[i].size()[:3])
                noobj_mask[i][anch_ious > self.ignore_threshold] = 0
        return noobj_mask, pred_boxes

The total loss is the sum of three losses:

  1. For boxes that really contain an object: the CIoU loss.
  2. Confidence loss: for boxes that really contain an object, the predicted confidence is compared with 1; for boxes that do not, the predicted confidence is compared with 0, excluding the ignored boxes that do not contain an assigned target.
  3. For boxes that really contain an object: the difference between the predicted classes and the true classes.

Running results
