Table of Contents

  • Introduction
  • Preliminaries
  • I. FGSM
    • 1. Principle
    • 2. Core Code
    • 3. Results
  • II. BIM/I-FGSM
    • 1. Principle
    • 2. Core Code
    • 3. Results
  • III. PGD
    • 1. Principle
    • 2. Core Code
    • 3. Results
  • IV. JSMA
    • 1. Principle
    • 2. Core Code
    • 3. Results
  • V. C&W
    • 1. Principle
    • 2. Core Code
    • 3. Results
  • VI. DeepFool
    • 1. Principle
    • 2. Core Code
    • 3. Results
  • Summary

Introduction

With the rapid development of deep learning, its vulnerability has drawn growing attention. This article walks through several classic adversarial example generation methods, each accompanied by my own code implementation.

Preliminaries

First, we need a network to attack. Here I use a convolutional neural network for flower classification that distinguishes five flower types (daisy, dandelion, rose, sunflower, tulip); its training is described in the post CNN卷积神经网络:花卉分类, with one change: the transforms.Normalize((0.5, 0.5, 0.5), (0.5, 0.5, 0.5)) step in that post's data preprocessing is removed, since it causes problems when visualizing the adversarial examples. The attacks below all target this network.
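For reference, here is a minimal sketch of how the target model is loaded and how an input image is preprocessed. The checkpoint path ./cnn.pt, the 128×128 input size, and the Model class all come from the complete script in the Summary section:

import cv2
import numpy as np
import torch
import torchvision.transforms as transforms

# Load the trained flower classifier (the Model class is defined in the Summary code)
model = Model()
model.load_state_dict(torch.load('./cnn.pt'))
model.eval()

label = ['daisy', 'dandelion', 'rose', 'sunflower', 'tulip']

# Preprocess: resize to 128x128 and scale pixels to [0, 1]; Normalize is deliberately omitted
img = cv2.imread('./rose.jpg')
img = cv2.resize(img, (128, 128))
img = transforms.ToTensor()(img).unsqueeze(0)  # shape (1, 3, 128, 128)
img.requires_grad = True  # needed so gradients w.r.t. the input can be computed

pred = model(img)
print('prediction:', label[np.argmax(pred.data.numpy(), axis=1)[0]])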

I. FGSM

1. Principle

$$\eta = \varepsilon \cdot \mathrm{sign}\left(\nabla_{x} L(\theta, x, y)\right)$$
$$\hat{x} = x + \eta$$
where:
$\eta$: the generated perturbation
$\varepsilon$: a parameter controlling the perturbation magnitude
$x$: the original sample
$\hat{x}$: the generated adversarial example
$y$: the target label
$\theta$: the network parameters
$L$: the loss function
$\mathrm{sign}$: the sign function (1 for inputs greater than 0, -1 for inputs less than 0)
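As a quick sanity check of the update rule, here is a toy illustration (my own example, not from the original post) of one FGSM step on a 2×2 "image":

# Toy FGSM step: each pixel moves by exactly ±eps (or stays put where the
# gradient is zero), then the result is clamped back into the valid [0, 1] range.
import torch
x = torch.tensor([[0.2, 0.9], [0.5, 0.1]])
grad = torch.tensor([[0.3, -0.7], [0.0, 1.2]])
eps = 0.1
x_adv = torch.clamp(x + eps * grad.sign(), 0, 1)
print(x_adv)  # tensor([[0.3000, 0.8000], [0.5000, 0.2000]])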

2. Core Code

def FGSM_ATTACK(img, epslion, grad):
    grad_sign = grad.sign()
    noise_fgsm = epslion * grad_sign
    adv_fgsm = noise_fgsm + img
    adv_fgsm = torch.clamp(adv_fgsm, 0, 1)
    return adv_fgsm, noise_fgsm
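FGSM_ATTACK expects the gradient of the loss with respect to the input to be computed beforehand. For context, this is the corresponding call from the complete script in the Summary (before_attack is the clean prediction, id is the class originally predicted for the image):

loss = nn.CrossEntropyLoss()
loss_fgsm = loss(before_attack, torch.tensor([id]))
loss_fgsm.backward()
grad_fgsm = img.grad.data
adv_fgsm, noise_fgsm = FGSM_ATTACK(img=img, epslion=0.1, grad=grad_fgsm)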

3. Results

Original image: classification result (rose)

Noise:

Adversarial example: classification result (tulip)

II. BIM/I-FGSM

1. Principle

$$X_{N+1}^{\mathrm{adv}} = \mathrm{Clip}_{X,\varepsilon}\left\{X_{N}^{\mathrm{adv}} + \alpha\,\mathrm{sign}\left(\nabla_{X} J\left(\theta, X_{N}^{\mathrm{adv}}, y_{\mathrm{true}}\right)\right)\right\}$$
where:
$X_{N+1}^{\mathrm{adv}}$: the adversarial example obtained at iteration N+1
$\mathrm{Clip}$: a clipping function that clips values so they do not exceed $\varepsilon$
$\alpha$: a parameter controlling the per-step perturbation size
$J$: the loss function
$\theta$: the network parameters
$y_{\mathrm{true}}$: the correct label
$\mathrm{sign}$: the sign function (1 for inputs greater than 0, -1 for inputs less than 0)

2. Core Code

BIM/I-FGSM can be viewed as an iterated version of FGSM; the following snippet is applied repeatedly (see the driver loop sketched after it):

def BIM_ATTACK(img, alpha, grad, epslion):
    grad_sign = grad.sign()
    noise_bim = alpha * grad_sign
    adv_bim = noise_bim + img
    adv_bim = torch.clamp(adv_bim, 0, epslion)
    return adv_bim.detach()
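For context, a condensed version of the driver loop from the complete script in the Summary (10 iterations, step size alpha=0.01, and epslion=1 so the clamp reduces to the usual [0, 1] pixel range):

# Iterate BIM_ATTACK, recomputing the input gradient at every step
adv_bim = img
out_bim = model(adv_bim)
id_bim = np.argmax(out_bim.data.numpy(), axis=1)[0]
loss = nn.CrossEntropyLoss()
for i in range(10):
    adv_bim.requires_grad = True
    out_bim = model(adv_bim)
    loss_bim = loss(out_bim, torch.tensor([id_bim]))
    loss_bim.backward()
    adv_bim = BIM_ATTACK(img=adv_bim, alpha=0.01, grad=adv_bim.grad.data, epslion=1)
    model.zero_grad()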

3. Results

Original image: classification result (rose)

Noise:

Adversarial example: classification result (tulip)

III. PGD

1. Principle

$$X^{N+1} = \Pi_{X+S}\left\{X^{N} + \alpha\,\mathrm{sign}\left[\nabla_{X} J\left(\theta, X^{N}, y_{\mathrm{true}}\right)\right]\right\}$$
where:
$X^{N+1}$: the adversarial example obtained at iteration N+1
$\Pi_{X+S}$: a projection function that projects values back into the $\varepsilon$-neighborhood of $X+S$
$X$: the original sample
$S$: random noise
$\alpha$: a parameter controlling the per-step perturbation size
$J$: the loss function
$\theta$: the network parameters
$y_{\mathrm{true}}$: the correct label
$\mathrm{sign}$: the sign function (1 for inputs greater than 0, -1 for inputs less than 0)

2. Core Code

The following snippet is applied repeatedly. PGD differs from BIM/I-FGSM in two main ways:
1. PGD uses a projection so that each pixel of the adversarial example differs from the corresponding pixel of the original image by at most $\varepsilon$, whereas BIM/I-FGSM uses clipping so that each pixel value of the adversarial example does not exceed $\varepsilon$.
2. PGD adds some random noise S to the sample before the attack starts (see the sketch after the code).

def PGD_ATTACK(origin_img, img, alpha, grad, epslion):
    grad_sign = grad.sign()
    noise_pgd = alpha * grad_sign
    adv_pgd = noise_pgd + img
    # project back into the epsilon neighborhood of the original image
    max = origin_img + epslion
    min = origin_img - epslion
    mask1 = adv_pgd > max
    mask1 = mask1.int()
    mask1_ = 1 - mask1
    adv_pgd = mask1 * max + mask1_ * adv_pgd
    mask2 = adv_pgd < min
    mask2 = mask2.int()
    mask2_ = 1 - mask2
    adv_pgd = mask2 * min + mask2_ * adv_pgd
    return adv_pgd.detach()
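Note that the driver loop in the Summary script actually starts PGD from the unperturbed image. A minimal sketch of the random start S described above (my own addition, not part of the original script) could look like:

# Hypothetical random start: sample uniform noise inside the epsilon ball
# and clamp back to the valid pixel range before the first PGD step.
epslion = 0.05
S = torch.empty_like(img).uniform_(-epslion, epslion)
adv_pgd = torch.clamp(img + S, 0, 1).detach()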

3. Results

Original image: classification result (rose)

Noise:

Adversarial example: classification result (tulip)

IV. JSMA

1. Principle

Positive perturbation:
$$S(X, t)[i] = \begin{cases} 0 & \text{if } \dfrac{\partial F_{t}(X)}{\partial X_{i}} < 0 \text{ or } \displaystyle\sum_{j \neq t} \dfrac{\partial F_{j}(X)}{\partial X_{i}} > 0 \\ \left(\dfrac{\partial F_{t}(X)}{\partial X_{i}}\right)\left|\displaystyle\sum_{j \neq t} \dfrac{\partial F_{j}(X)}{\partial X_{i}}\right| & \text{otherwise} \end{cases}$$
Negative perturbation:
$$S(X, t)[i] = \begin{cases} 0 & \text{if } \dfrac{\partial F_{t}(X)}{\partial X_{i}} > 0 \text{ or } \displaystyle\sum_{j \neq t} \dfrac{\partial F_{j}(X)}{\partial X_{i}} < 0 \\ \left(\dfrac{\partial F_{t}(X)}{\partial X_{i}}\right)\left|\displaystyle\sum_{j \neq t} \dfrac{\partial F_{j}(X)}{\partial X_{i}}\right| & \text{otherwise} \end{cases}$$
where:
$\frac{\partial F_{t}(X)}{\partial X_{i}}$: the derivative of the predicted probability of the target class $t$ with respect to the $i$-th input dimension
$\frac{\partial F_{j}(X)}{\partial X_{i}}$: the derivative of the predicted probability of another class $j$ with respect to the $i$-th input dimension
$S(X, t)[i]$: how strongly the $i$-th pixel influences the result when the input is $X$ and the target class is $t$
Positive perturbation: when the derivative is greater than 0, increasing that pixel value increases the probability of the target class
Negative perturbation: when the derivative is less than 0, decreasing that pixel value increases the probability of the target class

However, points satisfying these conditions may well not exist; that is, the formulas above may assign an influence of 0 to every point. The following criterion is therefore used instead: at each step, find the two most influential points and perturb them, iterating until the attack succeeds:
$$\arg\max_{(p_{1}, p_{2})} \left(\sum_{i=p_{1}, p_{2}} \frac{\partial F_{t}(X)}{\partial X_{i}}\right) \times \left|\sum_{i=p_{1}, p_{2}} \sum_{j \neq t} \frac{\partial F_{j}(X)}{\partial X_{i}}\right|$$

2. Core Code

The following snippet is applied repeatedly (a condensed driver loop is sketched after it):

def JSMA_ATTACK(img, model, theta, id_jsma, mask):
    adv_jsma = img.detach()
    adv_jsma.requires_grad = True
    out_jsma = model(adv_jsma)
    num_j = out_jsma.size()[1]  # number of outputs j (classes)
    num_i = np.prod(adv_jsma.shape[1:])  # number of inputs i (pixels)
    # Build the Jacobian of the outputs with respect to the inputs, one class at a time
    jacobian = torch.zeros([num_j, num_i])
    for j in range(num_j):
        out_jsma[0][j].backward(retain_graph=True)
        jacobian[j] = adv_jsma.grad.squeeze().view(-1, num_i).clone()
        adv_jsma.grad.zero_()  # clear the accumulated gradient so the rows do not mix
    alpha = jacobian[id_jsma].reshape(1, num_i)
    beta = torch.zeros([1, num_i])
    for i in range(num_j):
        if i != id_jsma:
            beta += jacobian[i].reshape(1, num_i)
    # Pairwise saliency map: entry (p, q) scores the joint influence of pixels p and q
    alpha = alpha + alpha.T
    beta = beta + beta.T
    map = (alpha * torch.abs(beta)) * (1 - torch.eye(num_i, num_i))
    max_value, max_idx = torch.max(map.view(-1, num_i * num_i) * mask.view(-1, num_i * num_i), dim=1)
    p = max_idx // num_i
    q = max_idx % num_i
    mask[:, p] = 0
    mask[:, q] = 0
    mask[p, :] = 0
    mask[q, :] = 0  # these pixels have reached the modification limit; never change them again
    shape = adv_jsma.size()
    adv_jsma = adv_jsma.view(-1, num_i).detach()
    adv_jsma[0, p] += theta
    adv_jsma[0, q] += theta
    adv_jsma = adv_jsma.view(shape)
    adv_jsma = torch.clamp(adv_jsma, 0, 1)
    return adv_jsma, mask
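For context, a condensed version of the driver loop from the complete script in the Summary (target class id_jsma=0, i.e. daisy, with theta=1.0; the mask starts all-ones and records which pixel pairs have already been used):

# Repeat JSMA_ATTACK until the predicted class changes from the original one (id)
num_i = np.prod(img.shape[1:])
mask = torch.ones(num_i, num_i)
adv_jsma = img
while True:
    adv_jsma, mask = JSMA_ATTACK(img=adv_jsma, model=model, theta=1.0, id_jsma=0, mask=mask)
    out = model(adv_jsma)
    if np.argmax(out.data.numpy(), axis=1)[0] != id:
        break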

3. Results

Original image: classification result (rose)

Noise:

Adversarial example: classification result (daisy)

V. C&W

1. Principle

$$\begin{array}{l} \operatorname{minimize}\ \mathcal{D}(x, x+\delta) \\ \text{such that } C(x+\delta) = t \\ \quad x+\delta \in [0,1]^{n} \end{array}$$
where:
$\mathcal{D}$: the distance between the two samples
$\delta$: the perturbation
$C$: the classifier
$t$: the target class
Because the constraint $C(x+\delta)=t$ is highly non-linear, it is hard for existing algorithms to handle directly. Instead, define an objective function $f$ such that $C(x+\delta)=t$ if and only if $f(x+\delta) \le 0$. There are many possible choices for $f$:
$$\begin{array}{l} f_{1}(x') = -\operatorname{loss}_{F,t}(x') + 1 \\ f_{2}(x') = \left(\max_{i \neq t}\left(F(x')_{i}\right) - F(x')_{t}\right)^{+} \\ f_{3}(x') = \operatorname{softplus}\left(\max_{i \neq t}\left(F(x')_{i}\right) - F(x')_{t}\right) - \log(2) \\ f_{4}(x') = \left(0.5 - F(x')_{t}\right)^{+} \\ f_{5}(x') = -\log\left(2 F(x')_{t} - 2\right) \\ f_{6}(x') = \left(\max_{i \neq t}\left(Z(x')_{i}\right) - Z(x')_{t}\right)^{+} \\ f_{7}(x') = \operatorname{softplus}\left(\max_{i \neq t}\left(Z(x')_{i}\right) - Z(x')_{t}\right) - \log(2) \end{array}$$
where $(e)^{+}$ is shorthand for $\max(e, 0)$, $\operatorname{softplus}(x) = \log(1 + \exp(x))$, $\operatorname{loss}_{F,t}(x)$ is the cross-entropy loss for $x$, $Z(x)$ is the network output before the softmax, and $F(x) = \operatorname{softmax}(Z(x))$. The problem can then be rewritten as:
$$\operatorname{minimize}\ \mathcal{D}(x, x+\delta) + c \cdot f(x+\delta)$$
where:
$c$: a weighting parameter

2. Core Code

$f_6$ works well in practice, so it is used to implement the attack:

def CW_ATTACK(img, model, id_cw, c):
    noise = torch.zeros(img.size())
    noise.requires_grad = True
    optimizer = optim.Adam([noise], lr=1)
    while(1):
        adv_cw = img + noise
        out_cw = model(adv_cw)
        print(out_cw)
        if id_cw == np.argmax(out_cw.data.numpy(), axis=1)[0]:
            break
        d = torch.sum(noise * noise)
        Z_t = out_cw[0][id_cw]
        Z_i = out_cw[0][0]
        relu = nn.ReLU(inplace=True)
        for i in range(out_cw.size()[1]):
            if out_cw[0][i] > Z_i and i != id_cw:
                Z_i = out_cw[0][i]
        loss_cw = c * relu(Z_i - Z_t) + d
        loss_cw.backward()
        optimizer.step()
        optimizer.zero_grad()
    return noise
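Calling it from the complete script in the Summary (target class id_cw=0, i.e. daisy, with weight c=1; the function returns the optimized noise, which is then added to the image):

noise_cw = CW_ATTACK(img=img, model=model, id_cw=0, c=1)
adv_cw = img + noise_cw
after_cw_attack = model(adv_cw)
id_cw = np.argmax(after_cw_attack.data.numpy(), axis=1)[0]
print('after_cw_attack:{}'.format(label[id_cw]))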

3. Results

Original image: classification result (rose)

Noise:

Adversarial example: classification result (daisy)

VI. DeepFool

1. Principle

The derivation is fairly involved; see the linked reference for details. In brief, DeepFool linearizes the classifier around the current sample, estimates the distance to each class's linearized decision boundary, and repeatedly steps toward the nearest boundary until the predicted label changes.

2. Core Code

def DeepFool_ATTACK(img, model):
    out_deepfool = model(img)
    num_k = out_deepfool.size()[1]  # k classes
    id_deepfool = np.argmax(out_deepfool.data.numpy(), axis=1)[0]
    while(1):
        img.requires_grad = True
        out_deepfool = model(img)
        # stop once the predicted class differs from the original one
        if id_deepfool != np.argmax(out_deepfool.data.numpy(), axis=1)[0]:
            break
        out_deepfool[0, id_deepfool].backward(retain_graph=True)
        origin_grad = img.grad.data.clone()
        img.grad.zero_()
        f_k = []
        w_k = []
        l = []
        for i in range(num_k):
            if i != id_deepfool:
                out_deepfool[0, i].backward(retain_graph=True)
                other_grad = img.grad.data.clone()
                img.grad.zero_()
                f_k.append(out_deepfool[0, i] - out_deepfool[0, id_deepfool])
                w_k.append(other_grad - origin_grad)  # gradient difference between class i and the current class
                l.append(torch.abs(out_deepfool[0, i] - out_deepfool[0, id_deepfool]) / torch.norm(other_grad - origin_grad))
        # step toward the nearest (linearized) decision boundary
        min_l = l[0]
        index = 0
        for i in range(1, len(l)):
            if l[i] < min_l:
                min_l = l[i]
                index = i
        img = (img + min_l * w_k[index] / torch.norm(w_k[index])).detach()
    return img
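Usage, as in the complete script in the Summary; the noise is recovered afterwards as the difference between the adversarial and original images:

adv_deepfool = DeepFool_ATTACK(img=img, model=model)
noise_deepfool = adv_deepfool - img
after_deepfool_attack = model(adv_deepfool)
id_deepfool = np.argmax(after_deepfool_attack.data.numpy(), axis=1)[0]
print('after_deepfool_attack:{}'.format(label[id_deepfool]))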

3. Results

Original image: classification result (rose)

Noise:

Adversarial example: classification result (tulip)

Summary

This post implements several adversarial example generation algorithms; the complete code is shown below, and the full project can be downloaded at the link (no points required).
Notes:
1. All of the code was written by the author while learning, so full correctness cannot be guaranteed; it is offered for reference and correction. Readers may also consult official implementations, such as the adversarial example library foolbox.
2. The JSMA code uses a large amount of memory and takes a long time to run; running it on a server with plenty of memory is recommended.

import torch
import torch.nn as nn
import numpy as np
import torch.optim as optim
import torchvision.transforms as transforms
import cv2


class Model(nn.Module):
    def __init__(self):
        super(Model, self).__init__()
        self.cnn = nn.Sequential(
            nn.Conv2d(3, 64, 3, 1, 1),     # input 3*128*128, output 64*128*128
            nn.ReLU(),
            nn.MaxPool2d(2),               # output 64*64*64
            nn.Conv2d(64, 128, 3, 1, 1),   # output 128*64*64
            nn.ReLU(),
            nn.MaxPool2d(2),               # output 128*32*32
            nn.Conv2d(128, 256, 3, 1, 1),  # output 256*32*32
            nn.ReLU(),
            nn.MaxPool2d(2),               # output 256*16*16
            nn.Conv2d(256, 512, 3, 1, 1),  # output 512*16*16
            nn.ReLU(),
            nn.MaxPool2d(2),               # output 512*8*8
            nn.Conv2d(512, 512, 3, 1, 1),  # output 512*8*8
            nn.ReLU(),
            nn.MaxPool2d(2),               # output 512*4*4
        )
        self.fc = nn.Sequential(
            nn.Linear(512 * 4 * 4, 1024),  # input 8192, output 1024
            nn.ReLU(),
            nn.Linear(1024, 512),          # input 1024, output 512
            nn.ReLU(),
            nn.Linear(512, 5)              # input 512, output 5
        )

    def forward(self, x):
        out = self.cnn(x)
        out = out.view(out.size()[0], -1)  # batch_size*8192
        return self.fc(out)


def FGSM_ATTACK(img, epslion, grad):
    grad_sign = grad.sign()
    noise_fgsm = epslion * grad_sign
    adv_fgsm = noise_fgsm + img
    adv_fgsm = torch.clamp(adv_fgsm, 0, 1)
    return adv_fgsm, noise_fgsm


def BIM_ATTACK(img, alpha, grad, epslion):
    grad_sign = grad.sign()
    noise_bim = alpha * grad_sign
    adv_bim = noise_bim + img
    adv_bim = torch.clamp(adv_bim, 0, epslion)
    return adv_bim.detach()


def PGD_ATTACK(origin_img, img, alpha, grad, epslion):
    grad_sign = grad.sign()
    noise_pgd = alpha * grad_sign
    adv_pgd = noise_pgd + img
    # project back into the epsilon neighborhood of the original image
    max = origin_img + epslion
    min = origin_img - epslion
    mask1 = adv_pgd > max
    mask1 = mask1.int()
    mask1_ = 1 - mask1
    adv_pgd = mask1 * max + mask1_ * adv_pgd
    mask2 = adv_pgd < min
    mask2 = mask2.int()
    mask2_ = 1 - mask2
    adv_pgd = mask2 * min + mask2_ * adv_pgd
    return adv_pgd.detach()


def JSMA_ATTACK(img, model, theta, id_jsma, mask):
    adv_jsma = img.detach()
    adv_jsma.requires_grad = True
    out_jsma = model(adv_jsma)
    num_j = out_jsma.size()[1]  # number of outputs j (classes)
    num_i = np.prod(adv_jsma.shape[1:])  # number of inputs i (pixels)
    # Build the Jacobian of the outputs with respect to the inputs, one class at a time
    jacobian = torch.zeros([num_j, num_i])
    for j in range(num_j):
        out_jsma[0][j].backward(retain_graph=True)
        jacobian[j] = adv_jsma.grad.squeeze().view(-1, num_i).clone()
        adv_jsma.grad.zero_()  # clear the accumulated gradient so the rows do not mix
    alpha = jacobian[id_jsma].reshape(1, num_i)
    beta = torch.zeros([1, num_i])
    for i in range(num_j):
        if i != id_jsma:
            beta += jacobian[i].reshape(1, num_i)
    # Pairwise saliency map: entry (p, q) scores the joint influence of pixels p and q
    alpha = alpha + alpha.T
    beta = beta + beta.T
    map = (alpha * torch.abs(beta)) * (1 - torch.eye(num_i, num_i))
    max_value, max_idx = torch.max(map.view(-1, num_i * num_i) * mask.view(-1, num_i * num_i), dim=1)
    p = max_idx // num_i
    q = max_idx % num_i
    mask[:, p] = 0
    mask[:, q] = 0
    mask[p, :] = 0
    mask[q, :] = 0  # these pixels have reached the modification limit; never change them again
    shape = adv_jsma.size()
    adv_jsma = adv_jsma.view(-1, num_i).detach()
    adv_jsma[0, p] += theta
    adv_jsma[0, q] += theta
    adv_jsma = adv_jsma.view(shape)
    adv_jsma = torch.clamp(adv_jsma, 0, 1)
    return adv_jsma, mask


def CW_ATTACK(img, model, id_cw, c):
    noise = torch.zeros(img.size())
    noise.requires_grad = True
    optimizer = optim.Adam([noise], lr=1)
    while(1):
        adv_cw = img + noise
        out_cw = model(adv_cw)
        print(out_cw)
        if id_cw == np.argmax(out_cw.data.numpy(), axis=1)[0]:
            break
        d = torch.sum(noise * noise)
        Z_t = out_cw[0][id_cw]
        Z_i = out_cw[0][0]
        relu = nn.ReLU(inplace=True)
        for i in range(out_cw.size()[1]):
            if out_cw[0][i] > Z_i and i != id_cw:
                Z_i = out_cw[0][i]
        loss_cw = c * relu(Z_i - Z_t) + d
        loss_cw.backward()
        optimizer.step()
        optimizer.zero_grad()
    return noise


def DeepFool_ATTACK(img, model):
    out_deepfool = model(img)
    num_k = out_deepfool.size()[1]  # k classes
    id_deepfool = np.argmax(out_deepfool.data.numpy(), axis=1)[0]
    while(1):
        img.requires_grad = True
        out_deepfool = model(img)
        # stop once the predicted class differs from the original one
        if id_deepfool != np.argmax(out_deepfool.data.numpy(), axis=1)[0]:
            break
        out_deepfool[0, id_deepfool].backward(retain_graph=True)
        origin_grad = img.grad.data.clone()
        img.grad.zero_()
        f_k = []
        w_k = []
        l = []
        for i in range(num_k):
            if i != id_deepfool:
                out_deepfool[0, i].backward(retain_graph=True)
                other_grad = img.grad.data.clone()
                img.grad.zero_()
                f_k.append(out_deepfool[0, i] - out_deepfool[0, id_deepfool])
                w_k.append(other_grad - origin_grad)  # gradient difference between class i and the current class
                l.append(torch.abs(out_deepfool[0, i] - out_deepfool[0, id_deepfool]) / torch.norm(other_grad - origin_grad))
        # step toward the nearest (linearized) decision boundary
        min_l = l[0]
        index = 0
        for i in range(1, len(l)):
            if l[i] < min_l:
                min_l = l[i]
                index = i
        img = (img + min_l * w_k[index] / torch.norm(w_k[index])).detach()
    return img


if __name__ == '__main__':
    model = Model()
    model_data = torch.load('./cnn.pt')
    model.load_state_dict(model_data)
    model.eval()
    label = ['daisy', 'dandelion', 'rose', 'sunflower', 'tulip']
    img = cv2.imread('./rose.jpg')
    img = cv2.resize(img, (128, 128))
    tool1 = transforms.Compose([
        transforms.ToTensor(),
        # transforms.Normalize((0.5, 0.5, 0.5), (0.5, 0.5, 0.5)),
    ])
    img = tool1(img)
    img = img.unsqueeze(0)
    img.requires_grad = True
    before_attack = model(img)
    id = np.argmax(before_attack.data.numpy(), axis=1)[0]
    print('before_attack:{}'.format(label[id]))
    mode = input('Enter attack mode (FGSM, BIM, PGD, JSMA, C&W, DeepFool): ')
    if mode == 'FGSM':
        # FGSM attack
        loss = nn.CrossEntropyLoss()
        loss_fgsm = loss(before_attack, torch.tensor([id]))
        loss_fgsm.backward()
        grad_fgsm = img.grad.data
        adv_fgsm, noise_fgsm = FGSM_ATTACK(img=img, epslion=0.1, grad=grad_fgsm)
        model.zero_grad()
        after_fgsm_attack = model(adv_fgsm)
        id_fgsm = np.argmax(after_fgsm_attack.data.numpy(), axis=1)[0]
        tool2 = transforms.ToPILImage()
        adv_fgsm = np.asarray(tool2(adv_fgsm[0]))
        noise_fgsm = np.asarray(tool2(noise_fgsm[0]))
        cv2.imwrite('./adv_fgsm.jpg', adv_fgsm)
        cv2.imwrite('./noise_fgsm.jpg', noise_fgsm)
        print('after_fgsm_attack:{}'.format(label[id_fgsm]))
    elif mode == 'BIM':
        # BIM / I-FGSM attack
        adv_bim = img
        noise_bim = 0
        out_bim = model(adv_bim)
        id_bim = np.argmax(out_bim.data.numpy(), axis=1)[0]
        for i in range(0, 10):
            adv_bim.requires_grad = True
            out_bim = model(adv_bim)
            loss = nn.CrossEntropyLoss()
            loss_bim = loss(out_bim, torch.tensor([id_bim]))
            loss_bim.backward()
            grad_bim = adv_bim.grad.data
            adv_bim = BIM_ATTACK(img=adv_bim, alpha=0.01, grad=grad_bim, epslion=1)
            model.zero_grad()
        after_bim_attack = model(adv_bim)
        id_bim = np.argmax(after_bim_attack.data.numpy(), axis=1)[0]
        noise_bim = adv_bim - img
        tool2 = transforms.ToPILImage()
        adv_bim = np.asarray(tool2(adv_bim[0]))
        noise_bim = np.asarray(tool2(noise_bim[0]))
        cv2.imwrite('./adv_bim.jpg', adv_bim)
        cv2.imwrite('./noise_bim.jpg', noise_bim)
        print('after_bim_attack:{}'.format(label[id_bim]))
    elif mode == 'PGD':
        # PGD attack
        adv_pgd = img
        noise_pgd = 0
        out_pgd = model(adv_pgd)
        id_pgd = np.argmax(out_pgd.data.numpy(), axis=1)[0]
        for i in range(0, 10):
            adv_pgd.requires_grad = True
            out_pgd = model(adv_pgd)
            loss = nn.CrossEntropyLoss()
            loss_pgd = loss(out_pgd, torch.tensor([id_pgd]))
            loss_pgd.backward()
            grad_pgd = adv_pgd.grad.data
            adv_pgd = PGD_ATTACK(origin_img=img, img=adv_pgd, alpha=0.01, grad=grad_pgd, epslion=0.05)
            model.zero_grad()
        after_pgd_attack = model(adv_pgd)
        id_pgd = np.argmax(after_pgd_attack.data.numpy(), axis=1)[0]
        noise_pgd = adv_pgd - img
        tool2 = transforms.ToPILImage()
        adv_pgd = np.asarray(tool2(adv_pgd[0]))
        noise_pgd = np.asarray(tool2(noise_pgd[0]))
        cv2.imwrite('./adv_pgd.jpg', adv_pgd)
        cv2.imwrite('./noise_pgd.jpg', noise_pgd)
        print('after_pgd_attack:{}'.format(label[id_pgd]))
    elif mode == 'JSMA':
        num = 0
        adv_jsma = img
        mask = torch.ones(np.prod(adv_jsma.shape[1:]), np.prod(adv_jsma.shape[1:]))
        while(1):
            num += 1
            print(num)
            adv_jsma, mask = JSMA_ATTACK(img=adv_jsma, model=model, theta=1.0, id_jsma=0, mask=mask)
            after_jsma_attack = model(adv_jsma)
            print(after_jsma_attack)
            id_jsma = np.argmax(after_jsma_attack.data.numpy(), axis=1)[0]
            if id_jsma != id:
                break
        noise_jsma = adv_jsma - img
        tool2 = transforms.ToPILImage()
        adv_jsma = np.asarray(tool2(adv_jsma[0]))
        noise_jsma = np.asarray(tool2(noise_jsma[0]))
        cv2.imwrite('./adv_jsma.jpg', adv_jsma)
        cv2.imwrite('./noise_jsma.jpg', noise_jsma)
        print('after_jsma_attack:{}'.format(label[id_jsma]))
    elif mode == 'C&W':
        noise_cw = CW_ATTACK(img=img, model=model, id_cw=0, c=1)
        adv_cw = img + noise_cw
        after_cw_attack = model(adv_cw)
        id_cw = np.argmax(after_cw_attack.data.numpy(), axis=1)[0]
        tool2 = transforms.ToPILImage()
        adv_cw = np.asarray(tool2(adv_cw[0]))
        noise_cw = np.asarray(tool2(noise_cw[0]))
        cv2.imwrite('./adv_cw.jpg', adv_cw)
        cv2.imwrite('./noise_cw.jpg', noise_cw)
        print('after_cw_attack:{}'.format(label[id_cw]))
    elif mode == 'DeepFool':
        adv_deepfool = DeepFool_ATTACK(img=img, model=model)
        noise_deepfool = adv_deepfool - img
        after_deepfool_attack = model(adv_deepfool)
        id_deepfool = np.argmax(after_deepfool_attack.data.numpy(), axis=1)[0]
        tool2 = transforms.ToPILImage()
        adv_deepfool = np.asarray(tool2(adv_deepfool[0]))
        noise_deepfool = np.asarray(tool2(noise_deepfool[0]))
        cv2.imwrite('./adv_deepfool.jpg', adv_deepfool)
        cv2.imwrite('./noise_deepfool.jpg', noise_deepfool)
        print('after_deepfool_attack:{}'.format(label[id_deepfool]))