Table of Contents

    • Preface
    • Modify the yaml file (using yolov5s as an example)
      • Change only one place
      • Replace every Concat with BiFPN_Add
      • Print the model parameters
    • Modify common.py
    • Modify yolo.py
    • Modify train.py
      • 1. Add the BiFPN weight parameters to the optimizer
      • 2. Check how the BiFPN_Add layer parameters are updated
    • References

Preface

An earlier blog post briefly introduced the principle of BiFPN and how the YOLOv5 author integrates it: 【魔改YOLOv5-6.x(中)】:加入ACON激活函数、CBAM和CA注意力机制、加权双向特征金字塔BiFPN

This post takes the BiFPN integration a step further, mainly following: YOLOv5结合BiFPN

Modify the yaml file (using yolov5s as an example)

Change only one place

This post modifies yolov5s.yaml as the example. When editing the model configuration file, keep the following points in mind:

  • This yaml changes only one place: the Concat at layer 19 is replaced with BiFPN_Add. To replace the Concat at other layers, follow the same pattern.
  • BiFPN_Add is essentially an add operation, not a concat, so all of its input layers must have exactly the same shape (channel count, feature-map size, and so on). The original arguments [-1, 13, 6] therefore need adjusting to satisfy this requirement:
    • Layer -1 is the output of the previous layer; its output channel argument was originally 256 and is changed to 512 here.
    • Layer 13 is this line: [-1, 3, C3, [512, False]], # 13
    • After this change, every input to BiFPN_Add has shape [bs, 256, 40, 40].
    • Finally, the arguments of the BiFPN_Add layer are set to [256, 256], i.e., both the input and output channel counts are 256 (the channel-scaling sketch after the yaml below shows why the listed 512 ends up as 256 actual channels on yolov5s).
```yaml
# YOLOv5 by Ultralytics, GPL-3.0 license

# Parameters
nc: 80  # number of classes
depth_multiple: 0.33  # model depth multiple
width_multiple: 0.50  # layer channel multiple
anchors:
  - [10,13, 16,30, 33,23]  # P3/8
  - [30,61, 62,45, 59,119]  # P4/16
  - [116,90, 156,198, 373,326]  # P5/32

# YOLOv5 v6.0 backbone
backbone:
  # [from, number, module, args]
  [[-1, 1, Conv, [64, 6, 2, 2]],  # 0-P1/2
   [-1, 1, Conv, [128, 3, 2]],  # 1-P2/4
   [-1, 3, C3, [128]],
   [-1, 1, Conv, [256, 3, 2]],  # 3-P3/8
   [-1, 6, C3, [256]],
   [-1, 1, Conv, [512, 3, 2]],  # 5-P4/16
   [-1, 9, C3, [512]],
   [-1, 1, Conv, [1024, 3, 2]],  # 7-P5/32
   [-1, 3, C3, [1024]],
   [-1, 1, SPPF, [1024, 5]],  # 9
  ]

# YOLOv5 v6.0 BiFPN head
head:
  [[-1, 1, Conv, [512, 1, 1]],
   [-1, 1, nn.Upsample, [None, 2, 'nearest']],
   [[-1, 6], 1, Concat, [1]],  # cat backbone P4
   [-1, 3, C3, [512, False]],  # 13

   [-1, 1, Conv, [256, 1, 1]],
   [-1, 1, nn.Upsample, [None, 2, 'nearest']],
   [[-1, 4], 1, Concat, [1]],  # cat backbone P3
   [-1, 3, C3, [256, False]],  # 17 (P3/8-small)

   [-1, 1, Conv, [512, 3, 2]],  # adjust channels so BiFPN_Add can add correctly
   [[-1, 13, 6], 1, BiFPN_Add3, [256, 256]],  # cat P4 <--- BiFPN change; note yolov5s channel counts are half of the listed args
   [-1, 3, C3, [512, False]],  # 20 (P4/16-medium)

   [-1, 1, Conv, [512, 3, 2]],
   [[-1, 10], 1, Concat, [1]],  # cat head P5
   [-1, 3, C3, [1024, False]],  # 23 (P5/32-large)

   [[17, 20, 23], 1, Detect, [nc, anchors]],  # Detect(P3, P4, P5)
  ]
```
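The comment in the head notes that yolov5s uses half of the listed channel counts. That comes from width_multiple = 0.50; a quick sketch of the rounding rule applied in parse_model (make_divisible is re-implemented here for illustration; YOLOv5 ships an equivalent helper in utils/general.py):

```python
# Why an argument of [512] yields 256 actual channels on yolov5s (width_multiple = 0.50).
import math

def make_divisible(x, divisor):
    # round the scaled channel count up to the nearest multiple of `divisor`
    return math.ceil(x / divisor) * divisor

width_multiple = 0.50  # from the yaml above
for c in (128, 256, 512, 1024):
    print(c, '->', make_divisible(c * width_multiple, 8))
# 128 -> 64, 256 -> 128, 512 -> 256, 1024 -> 512
```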

Replace every Concat with BiFPN_Add

```yaml
# YOLOv5 by Ultralytics, GPL-3.0 license

# Parameters
nc: 80  # number of classes
depth_multiple: 0.33  # model depth multiple
width_multiple: 0.50  # layer channel multiple
anchors:
  - [10,13, 16,30, 33,23]  # P3/8
  - [30,61, 62,45, 59,119]  # P4/16
  - [116,90, 156,198, 373,326]  # P5/32

# YOLOv5 v6.0 backbone
backbone:
  # [from, number, module, args]
  [[-1, 1, Conv, [64, 6, 2, 2]],  # 0-P1/2
   [-1, 1, Conv, [128, 3, 2]],  # 1-P2/4
   [-1, 3, C3, [128]],
   [-1, 1, Conv, [256, 3, 2]],  # 3-P3/8
   [-1, 6, C3, [256]],
   [-1, 1, Conv, [512, 3, 2]],  # 5-P4/16
   [-1, 9, C3, [512]],
   [-1, 1, Conv, [1024, 3, 2]],  # 7-P5/32
   [-1, 3, C3, [1024]],
   [-1, 1, SPPF, [1024, 5]],  # 9
  ]

# YOLOv5 v6.0 BiFPN head
head:
  [[-1, 1, Conv, [512, 1, 1]],
   [-1, 1, nn.Upsample, [None, 2, 'nearest']],
   [[-1, 6], 1, BiFPN_Add2, [256, 256]],  # cat backbone P4
   [-1, 3, C3, [512, False]],  # 13

   [-1, 1, Conv, [256, 1, 1]],
   [-1, 1, nn.Upsample, [None, 2, 'nearest']],
   [[-1, 4], 1, BiFPN_Add2, [128, 128]],  # cat backbone P3
   [-1, 3, C3, [256, False]],  # 17 (P3/8-small)

   [-1, 1, Conv, [512, 3, 2]],  # adjust channels so BiFPN_Add can add correctly
   [[-1, 13, 6], 1, BiFPN_Add3, [256, 256]],  # cat P4 <--- BiFPN change; note yolov5s channel counts are half of the listed args
   [-1, 3, C3, [512, False]],  # 20 (P4/16-medium)

   [-1, 1, Conv, [512, 3, 2]],
   [[-1, 10], 1, BiFPN_Add2, [256, 256]],  # cat head P5
   [-1, 3, C3, [1024, False]],  # 23 (P5/32-large)

   [[17, 20, 23], 1, Detect, [nc, anchors]],  # Detect(P3, P4, P5)
  ]
```

Print the model parameters

You can refer to this blog post, 【YOLOv5-6.x】模型参数及detect层输出测试(自用), to test the model configuration file and check the output (a minimal script to reproduce such a printout follows it):

```
                 from  n    params  module                                  arguments
  0                -1  1      3520  models.common.Conv                      [3, 32, 6, 2, 2]
  1                -1  1     18560  models.common.Conv                      [32, 64, 3, 2]
  2                -1  1     18816  models.common.C3                        [64, 64, 1]
  3                -1  1     73984  models.common.Conv                      [64, 128, 3, 2]
  4                -1  2    115712  models.common.C3                        [128, 128, 2]
  5                -1  1    295424  models.common.Conv                      [128, 256, 3, 2]
  6                -1  3    625152  models.common.C3                        [256, 256, 3]
  7                -1  1   1180672  models.common.Conv                      [256, 512, 3, 2]
  8                -1  1   1182720  models.common.C3                        [512, 512, 1]
  9                -1  1    656896  models.common.SPPF                      [512, 512, 5]
 10                -1  1    131584  models.common.Conv                      [512, 256, 1, 1]
 11                -1  1         0  torch.nn.modules.upsampling.Upsample    [None, 2, 'nearest']
 12           [-1, 6]  1     65794  models.common.BiFPN_Add2                [256, 256]
 13                -1  1    296448  models.common.C3                        [256, 256, 1, False]
 14                -1  1     33024  models.common.Conv                      [256, 128, 1, 1]
 15                -1  1         0  torch.nn.modules.upsampling.Upsample    [None, 2, 'nearest']
 16           [-1, 4]  1     16514  models.common.BiFPN_Add2                [128, 128]
 17                -1  1     74496  models.common.C3                        [128, 128, 1, False]
 18                -1  1    295424  models.common.Conv                      [128, 256, 3, 2]
 19       [-1, 13, 6]  1     65795  models.common.BiFPN_Add3                [256, 256]
 20                -1  1    296448  models.common.C3                        [256, 256, 1, False]
 21                -1  1    590336  models.common.Conv                      [256, 256, 3, 2]
 22          [-1, 10]  1     65794  models.common.BiFPN_Add2                [256, 256]
 23                -1  1   1051648  models.common.C3                        [256, 512, 1, False]
 24      [17, 20, 23]  1    229245  models.yolo.Detect                      [80, [[10, 13, 16, 30, 33, 23], [30, 61, 62, 45, 59, 119], [116, 90, 156, 198, 373, 326]], [128, 256, 512]]

Model Summary: 278 layers, 7384006 parameters, 7384006 gradients, 17.2 GFLOPs
```
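A minimal way to reproduce a printout like the one above is to build the model directly from the modified config. This is only a sketch: 'models/yolov5s_bifpn.yaml' is a placeholder name for the edited config, it must be run from the YOLOv5 repo root, and the common.py / yolo.py changes described below must already be in place.

```python
# Build the model from the modified config and run a dummy forward pass.
import torch
from models.yolo import Model

model = Model('models/yolov5s_bifpn.yaml', ch=3, nc=80)  # parse_model logs a layer table like the one above
model.eval()
with torch.no_grad():
    pred = model(torch.zeros(1, 3, 640, 640))[0]  # in eval mode, index 0 is the concatenated prediction tensor
print(pred.shape)  # expected: torch.Size([1, 25200, 85]) for a 640x640 input with nc=80
```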

Modify common.py

  • Copy and paste the following code (a quick sanity check follows the code block):
```python
# BiFPN: set up learnable parameters that weight the different branches

# Add operation over two branches
class BiFPN_Add2(nn.Module):
    def __init__(self, c1, c2):
        super(BiFPN_Add2, self).__init__()
        # nn.Parameter turns a non-trainable Tensor into a trainable Parameter and
        # registers it with the parent module, so model.parameters() includes it
        # and it is optimized automatically along with the other parameters
        self.w = nn.Parameter(torch.ones(2, dtype=torch.float32), requires_grad=True)
        self.epsilon = 0.0001
        self.conv = nn.Conv2d(c1, c2, kernel_size=1, stride=1, padding=0)
        self.silu = nn.SiLU()

    def forward(self, x):
        w = self.w
        weight = w / (torch.sum(w, dim=0) + self.epsilon)
        return self.conv(self.silu(weight[0] * x[0] + weight[1] * x[1]))


# Add operation over three branches
class BiFPN_Add3(nn.Module):
    def __init__(self, c1, c2):
        super(BiFPN_Add3, self).__init__()
        self.w = nn.Parameter(torch.ones(3, dtype=torch.float32), requires_grad=True)
        self.epsilon = 0.0001
        self.conv = nn.Conv2d(c1, c2, kernel_size=1, stride=1, padding=0)
        self.silu = nn.SiLU()

    def forward(self, x):
        w = self.w
        # Fast normalized fusion: normalize the weights
        weight = w / (torch.sum(w, dim=0) + self.epsilon)
        return self.conv(self.silu(weight[0] * x[0] + weight[1] * x[1] + weight[2] * x[2]))
```
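As a quick sanity check (not part of the referenced post), the two modules can be fed dummy inputs to confirm that the fused output keeps the shared input shape:

```python
# Standalone sanity check; assumes the BiFPN_Add2 / BiFPN_Add3 definitions above are in scope.
import torch

x1, x2, x3 = (torch.randn(2, 256, 40, 40) for _ in range(3))

add2 = BiFPN_Add2(256, 256)
add3 = BiFPN_Add3(256, 256)
print(add2([x1, x2]).shape)      # torch.Size([2, 256, 40, 40])
print(add3([x1, x2, x3]).shape)  # torch.Size([2, 256, 40, 40])
print(add3.w)                    # the three learnable fusion weights, initialized to 1.0
```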

Modify yolo.py

  • In the parse_model function, find the elif m is Concat: statement and add the BiFPN_Add branch right after it (the toy example after the snippet shows where the channel count comes from):
```python
        elif m is Concat:
            c2 = sum(ch[x] for x in f)
        # add the BiFPN_Add branches
        elif m in [BiFPN_Add2, BiFPN_Add3]:
            c2 = max([ch[x] for x in f])
```
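Compared with the Concat branch, only the channel bookkeeping changes: Concat stacks its inputs along the channel dimension, so c2 is the sum of the input channels, whereas BiFPN_Add adds feature maps whose channel counts are identical, so c2 is simply that shared value (which max picks out). A toy illustration with made-up values:

```python
# Toy illustration of the c2 bookkeeping; ch maps layer index -> output channels (values made up).
ch = {6: 256, 13: 256, 18: 256}
f = [18, 13, 6]  # resolved 'from' indices of the BiFPN_Add3 layer in the yaml above

c2_concat = sum(ch[x] for x in f)  # 768 -> what Concat would report
c2_bifpn = max(ch[x] for x in f)   # 256 -> BiFPN_Add keeps the shared channel count
print(c2_concat, c2_bifpn)
```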

Modify train.py

1. Add the BiFPN weight parameters to the optimizer

  • Add the w parameter defined in the BiFPN_Add2 and BiFPN_Add3 modules to parameter group g1 (a sketch of how the three groups reach the optimizer follows the code):
```python
g0, g1, g2 = [], [], []  # optimizer parameter groups
for v in model.modules():
    # hasattr: check whether the object has the given attribute; returns a bool
    if hasattr(v, 'bias') and isinstance(v.bias, nn.Parameter):  # bias
        g2.append(v.bias)  # biases
    if isinstance(v, nn.BatchNorm2d):  # weight (no decay)
        g0.append(v.weight)
    elif hasattr(v, 'weight') and isinstance(v.weight, nn.Parameter):  # weight (with decay)
        g1.append(v.weight)
    # BiFPN_Add fusion weights
    elif isinstance(v, BiFPN_Add2) and hasattr(v, 'w') and isinstance(v.w, nn.Parameter):
        g1.append(v.w)
    elif isinstance(v, BiFPN_Add3) and hasattr(v, 'w') and isinstance(v.w, nn.Parameter):
        g1.append(v.w)
```
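Putting w into g1 means the fusion weights land in the same optimizer group as the ordinary convolution weights, i.e. the group that receives weight decay. The sketch below replays the grouping on a tiny throwaway model and hands the groups to SGD in roughly the way train.py does; the learning-rate / momentum / weight-decay numbers are placeholders, not YOLOv5's hyperparameter file:

```python
# Standalone sketch of the grouping scheme; assumes BiFPN_Add2 / BiFPN_Add3 from common.py are importable.
import torch.nn as nn
from torch.optim import SGD

model = nn.ModuleList([nn.Conv2d(3, 8, 3), nn.BatchNorm2d(8), BiFPN_Add2(8, 8)])  # tiny throwaway model
g0, g1, g2 = [], [], []
for v in model.modules():
    if hasattr(v, 'bias') and isinstance(v.bias, nn.Parameter):
        g2.append(v.bias)
    if isinstance(v, nn.BatchNorm2d):
        g0.append(v.weight)
    elif hasattr(v, 'weight') and isinstance(v.weight, nn.Parameter):
        g1.append(v.weight)
    elif isinstance(v, (BiFPN_Add2, BiFPN_Add3)) and isinstance(v.w, nn.Parameter):
        g1.append(v.w)

optimizer = SGD(g0, lr=0.01, momentum=0.937, nesterov=True)        # BN weights (no decay)
optimizer.add_param_group({'params': g1, 'weight_decay': 0.0005})  # conv weights + BiFPN w (with decay)
optimizer.add_param_group({'params': g2})                          # biases (no decay)
print([len(pg['params']) for pg in optimizer.param_groups])        # -> [1, 3, 3]
```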

2. Check how the BiFPN_Add layer parameters are updated

To watch how the parameters of the BiFPN_Add layers are updated, you can refer to this blog post, 【Pytorch】查看模型某一层的参数数值(自用): locate the w parameters directly and print their values as training progresses.
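For example, a small helper like the one below (print_bifpn_weights is a hypothetical name, not part of YOLOv5) can be dropped into train.py and called once per epoch; with this model the w tensors should be the only parameters whose names end in '.w':

```python
# Hypothetical helper: print the BiFPN fusion weights of a YOLOv5 model during training.
def print_bifpn_weights(model):
    # `model` is the model object available in train.py (or ema.ema for the EMA weights)
    for name, param in model.named_parameters():
        if name.endswith('.w') and param.requires_grad:
            print(name, [round(v, 4) for v in param.detach().cpu().tolist()])
```

Calling it at the end of each epoch shows how the fusion weights of each BiFPN_Add layer drift away from their initial value of 1.0.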

References

YOLOv5结合BiFPN

【论文笔记】EfficientDet(BiFPN)(2020)

nn.Module、nn.Sequential和torch.nn.parameter学习笔记