
AI-Based Peptide Drug Analysis (Part 8)

2021SC@SDUSC

1. Code Analysis of the pyrosetta-Mode Prediction Process (continued from the previous post)

1.1 The Template Embedding Layer

The main role of this layer is to define a pixel-wise, attention-based embedding of the template features.

class Templ_emb(nn.Module):
    def __init__(self, d_t1d=3, d_t2d=10, d_templ=64, n_att_head=4, r_ff=4,
                 performer_opts=None, p_drop=0.1, max_len=5000):
        super(Templ_emb, self).__init__()
        self.proj = nn.Linear(d_t1d * 2 + d_t2d + 1, d_templ)
        self.pos = PositionalEncoding2D(d_templ, p_drop=p_drop)
        # attention along L
        enc_layer_L = AxialEncoderLayer(d_templ, d_templ * r_ff, n_att_head, p_drop=p_drop,
                                        performer_opts=performer_opts)
        self.encoder_L = Encoder(enc_layer_L, 1)

        self.norm = LayerNorm(d_templ)
        self.to_attn = nn.Linear(d_templ, 1)

In the constructor of the Templ_emb module we can see that a Linear layer, i.e. a fully connected layer, is defined explicitly; its input size is set to d_t1d * 2 + d_t2d + 1 = 3 * 2 + 10 + 1 = 17 and its output size to d_templ = 64.
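
As a rough illustration (the tensor shapes below are assumptions for demonstration only, not taken from the original forward pass), the projection simply maps the concatenated 17-dimensional template features at every (i, j) position to a 64-dimensional embedding:

import torch
import torch.nn as nn

# Hypothetical shapes: B templates, L residues; 17 concatenated template features per (i, j) pair
B, L = 2, 100
templ_feats = torch.randn(B, L, L, 17)   # d_t1d * 2 + d_t2d + 1 = 17

proj = nn.Linear(17, 64)                 # same sizes as self.proj above
emb = proj(templ_feats)                  # nn.Linear acts on the last dimension
print(emb.shape)                         # torch.Size([2, 100, 100, 64])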

After that, a two-dimensional positional encoding layer is defined.

1.1.1 The 2D Positional Encoding Layer

class PositionalEncoding2D(nn.Module):
    def __init__(self, d_model, p_drop=0.1):
        super(PositionalEncoding2D, self).__init__()
        self.drop = nn.Dropout(p_drop, inplace=True)
        d_model_half = d_model // 2
        div_term = torch.exp(torch.arange(0., d_model_half, 2) *
                             -(math.log(10000.0) / d_model_half))
        self.register_buffer('div_term', div_term)

In this layer, a Dropout layer is defined first, and the frequency term div_term is then precomputed as follows:
$d_2 = d_{model} / 2 = 64 / 2 = 32$

$\mathrm{divTerm} = e^{[0,\,2,\,4,\,\cdots]\,\cdot\,(-\ln 10000 \,/\, d_2)}$

register_buffer is then called to store this div_term tensor on the module, so that it is saved in model.state_dict() but is not treated as a trainable parameter.
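
As a minimal, self-contained sketch of the same idea (the class name and print statements are only for illustration; the computation mirrors the snippet above), a buffer registered this way is saved with the model yet does not appear among its parameters:

import math
import torch
import torch.nn as nn

class BufferDemo(nn.Module):
    """Minimal sketch: the same div_term computation, registered as a buffer."""
    def __init__(self, d_model=64):
        super().__init__()
        d_model_half = d_model // 2
        # exp([0, 2, 4, ...] * (-ln(10000) / d_model_half)), as in PositionalEncoding2D
        div_term = torch.exp(torch.arange(0., d_model_half, 2) *
                             -(math.log(10000.0) / d_model_half))
        self.register_buffer('div_term', div_term)

m = BufferDemo()
print('div_term' in m.state_dict())   # True: buffers are saved with the model
print(list(m.parameters()))           # []: buffers are not trainable parameters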

1.1.2 The Axial Encoder Layer

Next, the axial encoder layer is defined.

class AxialEncoderLayer(nn.Module):
    def __init__(self, d_model, d_ff, heads, p_drop=0.1, performer_opts=None,
                 use_tied_row=False, use_tied_col=False, use_soft_row=False):
        super(AxialEncoderLayer, self).__init__()
        self.use_performer = performer_opts is not None
        self.use_tied_row = use_tied_row
        self.use_tied_col = use_tied_col
        self.use_soft_row = use_soft_row
        # multihead attention
        if use_tied_row:
            self.attn_L = TiedMultiheadAttention(d_model, heads, dropout=p_drop)
        elif use_soft_row:
            self.attn_L = SoftTiedMultiheadAttention(d_model, heads, dropout=p_drop)
        else:
            if self.use_performer:
                self.attn_L = SelfAttention(dim=d_model, heads=heads, dropout=p_drop, 
                                            generalized_attention=True, **performer_opts)
            else:
                self.attn_L = MultiheadAttention(d_model, heads, dropout=p_drop)
        if use_tied_col:
            self.attn_N = TiedMultiheadAttention(d_model, heads, dropout=p_drop)
        else:
            if self.use_performer:
                self.attn_N = SelfAttention(dim=d_model, heads=heads, dropout=p_drop, 
                                            generalized_attention=True, **performer_opts)
            else:
                self.attn_N = MultiheadAttention(d_model, heads, dropout=p_drop)

        # feedforward
        self.ff = FeedForwardLayer(d_model, d_ff, p_drop=p_drop)

        # normalization module
        self.norm1 = LayerNorm(d_model)
        self.norm2 = LayerNorm(d_model)
        self.norm3 = LayerNorm(d_model)
        self.dropout1 = nn.Dropout(p_drop, inplace=True)
        self.dropout2 = nn.Dropout(p_drop, inplace=True)
        self.dropout3 = nn.Dropout(p_drop, inplace=True)

This module defines quite a few sub-layers. Its constructor takes four boolean flags, and from the arguments passed in and their default values we can see that in this call performer_opts receives the value "performer_L_opts": {"nb_features": 64}.

Hence use_performer is True, so the branch taken is self.attn_L = SelfAttention(dim=d_model, heads=heads, dropout=p_drop, generalized_attention=True, **performer_opts).
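
A minimal sketch of the flag logic described above (the string values are only placeholders for the classes that would be instantiated; performer_opts mirrors the "performer_L_opts" value quoted above):

# Assumed call-site configuration, as described above
performer_opts = {"nb_features": 64}
use_tied_row = use_soft_row = False          # constructor defaults
use_performer = performer_opts is not None   # True

if use_tied_row:
    attn_L = "TiedMultiheadAttention"
elif use_soft_row:
    attn_L = "SoftTiedMultiheadAttention"
elif use_performer:
    attn_L = "SelfAttention (Performer-style)"   # the branch taken in this call
else:
    attn_L = "MultiheadAttention"

print(attn_L)   # SelfAttention (Performer-style)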

class SelfAttention(nn.Module):
    def __init__(self, dim, k_dim=None, heads = 8, local_heads = 0, local_window_size = 256, nb_features = None, feature_redraw_interval = 1000, generalized_attention = False, kernel_fn = nn.ReLU(inplace=True), qr_uniform_q = False, dropout = 0., no_projection = False):
        super().__init__()
        assert dim % heads == 0, 'dimension must be divisible by number of heads'
        dim_head = dim // heads
        inner_dim = dim_head * heads

        if k_dim == None:
            k_dim = dim

        self.fast_attention = FastAttention(dim_head, nb_features, generalized_attention = generalized_attention, kernel_fn = kernel_fn, qr_uniform_q = qr_uniform_q, no_projection = no_projection)

        self.heads = heads
        self.dim = dim

        self.to_query = nn.Linear(dim, inner_dim)
        self.to_key = nn.Linear(k_dim, inner_dim)
        self.to_value = nn.Linear(k_dim, inner_dim)
        self.to_out = nn.Linear(inner_dim, dim)
        self.dropout = nn.Dropout(dropout, inplace=True)

        self.feature_redraw_interval = feature_redraw_interval
        self.register_buffer("calls_since_last_redraw", torch.tensor(0))

        self.max_tokens = 2**16

The module first asserts that the feature dimension is evenly divisible by the number of attention heads. Here dim is passed in as 64 and heads as 4, so the assertion passes; the per-head arithmetic is sketched below. The constructor then builds another custom module, the FastAttention attention mechanism, shown in the next subsection.
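
To make the dimension bookkeeping concrete (the numbers are the ones quoted above; this is a standalone check, not code from the project):

dim, heads = 64, 4                     # values quoted above
assert dim % heads == 0, 'dimension must be divisible by number of heads'
dim_head = dim // heads                # 16: per-head width handed to FastAttention
inner_dim = dim_head * heads           # 64: total width of the q/k/v projections
print(dim_head, inner_dim)             # 16 64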

1.1.3 The Attention Mechanism Module (FastAttention)

class FastAttention(nn.Module):
    def __init__(self, dim_heads, nb_features = None, ortho_scaling = 0, generalized_attention = False, kernel_fn = nn.ReLU(inplace=True), qr_uniform_q = False, no_projection = False):
        super().__init__()
        nb_features = default(nb_features, int(dim_heads * math.log(dim_heads)))

        self.dim_heads = dim_heads
        self.nb_features = nb_features
        self.ortho_scaling = ortho_scaling

        if not no_projection:
            self.create_projection = partial(gaussian_orthogonal_random_matrix, nb_rows = self.nb_features, nb_columns = dim_heads, scaling = ortho_scaling, qr_uniform_q = qr_uniform_q)
            projection_matrix = self.create_projection()
            self.register_buffer('projection_matrix', projection_matrix)

        self.generalized_attention = generalized_attention
        self.kernel_fn = kernel_fn


        self.no_projection = no_projection

dim_heads is passed in as 16 and nb_features as 64, and from the context we can tell that no_projection is False, so the partial call is executed. A partial function, like a decorator, extends an existing function, but it is not fully equivalent to a decorator. Its typical use case is a function that we need to call repeatedly while some of its arguments are fixed, known values: instead of repeating those arguments at every call site, functools.partial binds them once. That is exactly what happens here: gaussian_orthogonal_random_matrix is called multiple times later on, and the partial wraps its arguments in a single place.
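
As a generic illustration of functools.partial (make_matrix below is a hypothetical helper standing in for gaussian_orthogonal_random_matrix; it is not part of the project code):

from functools import partial

def make_matrix(nb_rows, nb_columns, fill):
    # Hypothetical helper: build an nb_rows x nb_columns list filled with `fill`
    return [[fill] * nb_columns for _ in range(nb_rows)]

# Bind the arguments once, the way create_projection fixes nb_rows=64, nb_columns=16, ...
create_projection = partial(make_matrix, nb_rows=64, nb_columns=16, fill=0.0)

m = create_projection()            # every later call reuses the pre-bound arguments
print(len(m), len(m[0]))           # 64 16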

In this call, nb_rows is fixed to 64, nb_columns to 16, scaling to 0, and qr_uniform_q to False. create_projection is then invoked, and the resulting matrix is stored on the module via register_buffer.

The remaining lines are simple member-variable assignments; the no_projection flag indicates whether the inputs are passed through a softmax instead of the random-feature projection.

Returning to SelfAttention, four fully connected layers are defined to produce the query, key, value, and output projections, together with a dropout layer; the calls_since_last_redraw tensor is then stored on the module via register_buffer.
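
A minimal sketch of what those four projections do to a batch of features (the batch and sequence-length values are assumptions for illustration; the real forward pass also splits the result into heads and hands it to FastAttention):

import torch
import torch.nn as nn

dim, heads = 64, 4
dim_head = dim // heads
inner_dim = dim_head * heads

to_query = nn.Linear(dim, inner_dim)
to_key = nn.Linear(dim, inner_dim)
to_value = nn.Linear(dim, inner_dim)
to_out = nn.Linear(inner_dim, dim)

x = torch.randn(2, 10, dim)                     # assumed (batch, length, dim) input
q, k, v = to_query(x), to_key(x), to_value(x)   # each has shape (2, 10, 64)

# Generic multi-head split: (batch, length, inner_dim) -> (batch, heads, length, dim_head)
q = q.view(2, 10, heads, dim_head).transpose(1, 2)
print(q.shape, to_out(v).shape)                 # torch.Size([2, 4, 10, 16]) torch.Size([2, 10, 64])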

The context then switches back to the AxialEncoderLayer constructor:

    if use_tied_col:
        self.attn_N = TiedMultiheadAttention(d_model, heads, dropout=p_drop)
    else:
        if self.use_performer:
            self.attn_N = SelfAttention(dim=d_model, heads=heads, dropout=p_drop, 
                                        generalized_attention=True, **performer_opts)
        else:
            self.attn_N = MultiheadAttention(d_model, heads, dropout=p_drop)

Here, because use_performer is True, SelfAttention is instantiated a second time, this time building the attention module for the other axis (attn_N). The construction is identical to the flow analysed above, so it is not repeated here.

After that, the feed-forward layer is defined: two fully connected layers plus a dropout layer (a sketch of a typical forward pass follows the snippet).

class FeedForwardLayer(nn.Module):
    def __init__(self, d_model, d_ff, p_drop=0.1):
        super(FeedForwardLayer, self).__init__()
        self.linear1 = nn.Linear(d_model, d_ff)
        self.dropout = nn.Dropout(p_drop, inplace=True)
        self.linear2 = nn.Linear(d_ff, d_model)
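
The constructor above only declares the sub-layers. A plausible forward pass for such a block, assuming the conventional linear -> activation -> dropout -> linear ordering (the project's own forward method is not shown in this excerpt), would look like this:

import torch
import torch.nn as nn
import torch.nn.functional as F

class FeedForwardSketch(nn.Module):
    """Hedged sketch of a standard transformer feed-forward block."""
    def __init__(self, d_model=64, d_ff=256, p_drop=0.1):   # d_ff = d_templ * r_ff = 256
        super().__init__()
        self.linear1 = nn.Linear(d_model, d_ff)
        self.dropout = nn.Dropout(p_drop)
        self.linear2 = nn.Linear(d_ff, d_model)

    def forward(self, x):
        # expand -> nonlinearity -> dropout -> project back to d_model
        return self.linear2(self.dropout(F.relu(self.linear1(x))))

out = FeedForwardSketch()(torch.randn(2, 10, 64))
print(out.shape)   # torch.Size([2, 10, 64])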

The normalization module:

        self.norm1 = LayerNorm(d_model)
        self.norm2 = LayerNorm(d_model)
        self.norm3 = LayerNorm(d_model)
        self.dropout1 = nn.Dropout(p_drop, inplace=True)
        self.dropout2 = nn.Dropout(p_drop, inplace=True)
        self.dropout3 = nn.Dropout(p_drop, inplace=True)

This part defines three normalization layers and three dropout layers. Inside the LayerNorm layer, nn.Parameter can be thought of as a type-conversion function: it turns a non-trainable Tensor into a trainable Parameter and binds that parameter to the module (it then appears in net.parameters(), so the optimizer can update it). After the conversion, the variable becomes part of the model, a parameter that the training process is allowed to modify. The point of using nn.Parameter here is precisely to let these variables be adjusted continuously during learning so that they converge to optimal values.

class LayerNorm(nn.Module):
    def __init__(self, d_model, eps=1e-5):
        super(LayerNorm, self).__init__()
        self.a_2 = nn.Parameter(torch.ones(d_model))
        self.b_2 = nn.Parameter(torch.zeros(d_model))
        self.eps = eps
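
The constructor above only registers the learnable scale a_2 and bias b_2. For reference, a layer normalization of this kind typically applies them as follows in its forward pass (this is a sketch of the standard formula, not the project's own forward code):

import torch
import torch.nn as nn

class LayerNormSketch(nn.Module):
    """Sketch of the standard layer-norm formula, using a_2 / b_2 as scale and bias."""
    def __init__(self, d_model=64, eps=1e-5):
        super().__init__()
        self.a_2 = nn.Parameter(torch.ones(d_model))   # learnable scale
        self.b_2 = nn.Parameter(torch.zeros(d_model))  # learnable bias
        self.eps = eps

    def forward(self, x):
        # normalize over the feature dimension, then rescale and shift
        mean = x.mean(-1, keepdim=True)
        std = x.std(-1, keepdim=True)
        return self.a_2 * (x - mean) / (std + self.eps) + self.b_2

y = LayerNormSketch()(torch.randn(2, 10, 64))
print(y.shape)   # torch.Size([2, 10, 64])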

The context then switches back to Templ_emb. After the nested modules above have been constructed, an Encoder layer is declared, and the axial encoder layer built above (enc_layer_L) is passed to it.

class Encoder(nn.Module):
    def __init__(self, enc_layer, n_layer):
        super(Encoder, self).__init__()
        self.layers = _get_clones(enc_layer, n_layer)
        self.n_layer = n_layer
   
    def forward(self, src, return_att=False):
        output = src
        for layer in self.layers:
            output = layer(output, return_att=return_att)
        return output

In this module, _get_clones deep-copies the given encoder layer n_layer times and stores the copies in an nn.ModuleList member variable (here n_layer is 1).
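
_get_clones itself is not shown in this excerpt; a common implementation, and the one used by PyTorch's own transformer code (assumed here to match the project's helper), is:

import copy
import torch.nn as nn

def _get_clones(module, n):
    # Deep-copy the layer n times so each clone has independent parameters
    return nn.ModuleList([copy.deepcopy(module) for _ in range(n)])

layer = nn.Linear(64, 64)        # stand-in for enc_layer_L
layers = _get_clones(layer, 1)   # n_layer is 1 in this call
print(len(layers))               # 1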

After that, another normalization layer is defined, which again converts certain tensors into trainable parameters:

    def __init__(self, d_model, eps=1e-5):
        super(LayerNorm, self).__init__()
        self.a_2 = nn.Parameter(torch.ones(d_model))
        self.b_2 = nn.Parameter(torch.zeros(d_model))
        self.eps = eps

Finally, the constructor ends with one more fully connected layer, self.to_attn, which maps the d_templ = 64 features down to a single scalar per position.

2. Summary

Because the model definitions nest many levels deep and each class takes a large number of parameters, the analysis above also lists the actual argument values used when these modules are instantiated, which makes it easier to follow the code and to decide which branches are taken.

Reposted from blog.csdn.net/weixin_45774350/article/details/121438143