PyTorch Study Notes: nn.LeakyReLU, the LeakyReLU activation function

Function: applies the following formula element-wise to activate the input:

$$\text{LeakyReLU}(x) = \max(0, x) + \alpha \cdot \min(0, x)$$

or equivalently

$$\text{LeakyReLU}(x) = \begin{cases} x, & \text{if } x \ge 0 \\ \alpha \times x, & \text{otherwise} \end{cases}$$

Compared with ReLU, this function keeps part of the negative axis instead of zeroing it out, which alleviates the problem of activations collapsing to zero so that a neuron's parameters can no longer be updated (the "dying ReLU" problem). The slope $\alpha$ defaults to 0.01.
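As a quick sanity check on the formula (this snippet is my own illustration, not taken from the official docs), the element-wise definition can be reproduced with two clamps and compared against torch.nn.functional.leaky_relu:

import torch
import torch.nn.functional as F

def leaky_relu_manual(x, alpha=0.01):
    # max(0, x) + alpha * min(0, x), applied element-wise
    return torch.clamp(x, min=0) + alpha * torch.clamp(x, max=0)

x = torch.randn(8)
# matches PyTorch's functional form with the default negative_slope=0.01
print(torch.allclose(leaky_relu_manual(x), F.leaky_relu(x)))  # True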

Function plot: (the curve can be reproduced with the plotting script at the end of this note)

Parameters:

  • negative_slope: controls the slope applied to negative inputs; default 1e-2
  • inplace: whether to modify the input tensor in place. If True, the input is overwritten directly; if False (the default), the input is left unchanged (see the sketch below)
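A minimal sketch of the inplace behavior (my own example; note that an in-place activation will raise an error on a leaf tensor that requires gradients):

import torch
import torch.nn as nn

x = torch.randn(4)
out = nn.LeakyReLU(inplace=False)(x)  # x is left untouched
print(out is x)                       # False: a new tensor is returned

y = torch.randn(4)
out2 = nn.LeakyReLU(inplace=True)(y)  # y is overwritten with the result
print(out2 is y)                      # True: same tensor object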

Notes:

  • The output has the same shape as the input (see the example below)
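For instance (a small check of my own), the module accepts tensors of arbitrary shape and returns that same shape:

import torch
import torch.nn as nn

m = nn.LeakyReLU()
x = torch.randn(2, 3, 4)  # any shape works
print(m(x).shape)         # torch.Size([2, 3, 4]), identical to the input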

Code example

Comparison with ReLU:

import torch
import torch.nn as nn

LeakyReLU = nn.LeakyReLU(negative_slope=5e-2)  # slope 0.05 on the negative axis
ReLU = nn.ReLU()

x = torch.randn(10)
value = ReLU(x)          # negative entries become 0
value_l = LeakyReLU(x)   # negative entries are scaled by 0.05
print(x)
print(value)
print(value_l)

Output:

# Input
tensor([ 0.1820, -0.4248, -0.9135,  0.1136, -1.0147, -0.5044,  0.1361,  0.0744,
         1.3379, -1.1290])
# ReLU
tensor([0.1820, 0.0000, 0.0000, 0.1136, 0.0000, 0.0000, 0.1361, 0.0744, 1.3379,
        0.0000])
# LeakyReLU
tensor([ 0.1820, -0.0212, -0.0457,  0.1136, -0.0507, -0.0252,  0.1361,  0.0744,
         1.3379, -0.0564])
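Each negative entry of the LeakyReLU row is the corresponding input scaled by negative_slope=5e-2 (for example, -0.4248 × 0.05 ≈ -0.0212), which can be verified by continuing the example above:

import torch

# expected output according to the piecewise definition
expected = torch.where(x >= 0, x, 5e-2 * x)
print(torch.allclose(value_l, expected))  # True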

Note: plotting script

import torch
import torch.nn as nn
import numpy as np
import matplotlib.pyplot as plt

LeakyReLU = nn.LeakyReLU(negative_slope=5e-2)
x = torch.from_numpy(np.linspace(-3, 3, 100))  # sample the interval [-3, 3]
value = LeakyReLU(x)
plt.plot(x, value)
plt.savefig('LeakyReLU.jpg')

Official documentation

nn.LeakyReLU:https://pytorch.org/docs/stable/generated/torch.nn.LeakyReLU.html#torch.nn.LeakyReLU

First draft completed: February 16, 2022