SpikingJelly Notes: Custom LIF & HH Neurons


Preface

This post walks through defining a custom LIF neuron circuit model and a custom HH neuron circuit model in SpikingJelly.


I. LIF Neuron Circuit

The LIF neuron circuit here differs slightly from the LIF model built into SpikingJelly, so the built-in model has to be adapted to the actual circuit.

1. Neuron Equations

The figure below shows a parallel-RC LIF neuron circuit: the membrane potential charges and discharges under the input current I, and is reset once it reaches the threshold.

From the zero-input and zero-state responses, the charge/discharge equations of the membrane potential are:

$\qquad V_{zi}=V_{0}*e^{-\frac{t}{\tau}}$

$\qquad V_{zs}=IR*(1-e^{-\frac{t}{\tau}})$

$\qquad V=V_{zi}+V_{zs}$

Let $a=e^{-\frac{t}{\tau}}$ and $b=R*(1-e^{-\frac{t}{\tau}})$, where $\tau=RC$ and $t$ becomes the simulation step $\Delta t$ (dt in the code); then:

$\qquad V=a*V_{0}+b*I$
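As a quick sanity check (added here, not part of the original post), the discrete coefficients a and b can be verified against the analytic step response:

import math

# Example values: R = 1e3 Ohm, C = 1e-6 F, dt = 1e-3 s, so tau = RC = 1e-3 s
R, C, dt = 1e3, 1e-6, 1e-3
tau = R * C
a = math.exp(-dt / tau)  # decay factor, e^{-1} ~ 0.368
b = (1 - a) * R          # input gain, ~ 632

# One discrete step from V0 = 0 with constant input I should match
# the analytic zero-state response V_zs = I*R*(1 - e^{-dt/tau})
I = 1e-3
v_discrete = a * 0.0 + b * I
v_analytic = I * R * (1 - math.exp(-dt / tau))
assert abs(v_discrete - v_analytic) < 1e-12
print(v_discrete)  # ~ 0.632 V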

2. Neuron Code

Based on the equations above, override the neuron's charging function:

class LIF_neuron(neuron.BaseNode):
    def __init__(self, v_threshold = 1.0, v_reset = 0.5,
                 R = 1e3, C = 1e-6, dt = 1e-3):
        super().__init__(v_threshold, v_reset)
        self.R = R    # membrane resistance (Ohm)
        self.C = C    # membrane capacitance (F)
        self.dt = dt  # simulation step (s)
        self.tau = self.R * self.C               # time constant tau = RC
        self.a = math.exp(- self.dt / self.tau)  # decay factor a = e^(-dt/tau)
        self.b = (1 - self.a) * self.R           # input gain b = R*(1 - a)
        self.v = 0.0
    def neuronal_charge(self, x: torch.Tensor):
        # discrete update V = a*V0 + b*I derived above
        self.v = self.a * self.v + self.b * x
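As a quick single-step check (added here, assuming the imports from the next section), one forward call from rest with constant input should reproduce the zero-state response computed above:

lif = LIF_neuron(R=1e3, C=1e-6, dt=1e-3)  # default single-step mode
with torch.no_grad():
    spike = lif(torch.ones(1, 1) * 1e-3)
print(lif.v)   # ~ 0.632, matching b * I from the sanity check above
print(spike)   # 0: still below the default threshold of 1.0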

3. Neuron Response

(1) Import libraries

import math
import torch
from torch import nn
from spikingjelly.activation_based import neuron, monitor, functional
from spikingjelly import visualizing

(2) Apply a stimulus: "1" for the first 50 steps and "0" for the last 50, with the pulse amplitude scaled to 1e-3

T = 100 # number of time steps
N = 1   # batch size
D = 1   # input dimension / number of neurons
x_seq1 = torch.ones(50, N, D) * 1e-3
x_seq2 = torch.zeros(50, N, D)
x_seq = torch.cat((x_seq1, x_seq2), 0)
lif_neuron = LIF_neuron(v_threshold = 3.0, v_reset = 0.5,
                        R = 5e3, C = 10e-6, dt = 5e-3)
net = nn.Sequential(lif_neuron)
lif_neuron.step_mode = 'm'     # multi-step mode
lif_neuron.store_v_seq = True  # keep the membrane potential at every step
print(net)

(3) Record the neuron state with monitors

# record the membrane potential
monitor_v = monitor.AttributeMonitor('v_seq',
                                      pre_forward=False,
                                      net=net,
                                      instance=neuron.BaseNode)
# record the output spikes
monitor_o = monitor.OutputMonitor(net=net,
                                  instance=neuron.BaseNode)
# multi-step mode: layer-by-layer propagation
with torch.no_grad(): # disable autograd during inference
    net(x_seq)
functional.reset_net(net) # reset the neuron state
# visualize the membrane potential and the output spikes
v_list = monitor_v.records[0].flatten()
s_list = monitor_o.records[0].flatten()
visualizing.plot_one_neuron_v_s(v_list.numpy(),
                                s_list.numpy(),
                                v_threshold=net[0].v_threshold,
                                v_reset=net[0].v_reset,
                                figsize=(6,6),
                                dpi=100)

(4) LIF neuron response

[Figure: membrane potential and output spikes of the LIF neuron]

II. HH Neuron Circuit

SpikingJelly has no built-in HH neuron model, so one is defined here directly from the neuron equations.

1. Neuron Equations

The figure below shows the equivalent circuit of an HH neuron: the conductances and reversal potentials of the Na, K, and leak channels in parallel with the membrane capacitance.

(1) Current equation:

$\qquad I_{m}=I_{Na}+I_{K}+I_{L}+C_{m}\frac{dV}{dt}$

$\qquad I_{Na}=g_{Na}*m^3*h*(V-E_{Na})$

$\qquad I_{K}=g_{K}*n^4*(V-E_{K})$

$\qquad I_{L}=g_{L}*(V-E_{L})$

(2) m, n, h are the ion-channel gating variables:

$\qquad \frac{dm}{dt}=\alpha_{m}*(1-m)-\beta_{m}*m$

$\qquad \frac{dn}{dt}=\alpha_{n}*(1-n)-\beta_{n}*n$

$\qquad \frac{dh}{dt}=\alpha_{h}*(1-h)-\beta_{h}*h$

(3) α and β are the channel transition rates:

$\qquad \alpha_{m}=\frac{0.1*(25-V)}{\exp((25-V)/10)-1}$

$\qquad \alpha_{n}=\frac{0.01*(10-V)}{\exp((10-V)/10)-1}$

$\qquad \alpha_{h}=0.07*\exp(-V/20)$

$\qquad \beta_{m}=4.0*\exp(-V/18)$

$\qquad \beta_{n}=0.125*\exp(-V/80)$

$\qquad \beta_{h}=\frac{1}{\exp((30-V)/10)+1}$
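As a quick check (added here, not part of the original post), evaluating these rates at the resting potential V = 0 mV gives the classic resting gating values m ≈ 0.053, n ≈ 0.318, h ≈ 0.596, which the code below uses as initial values:

import math

V = 0.0  # resting potential (mV), measured relative to rest as in the equations above
alpha_m = 0.1*(25-V) / (math.exp((25-V)/10) - 1)
beta_m  = 4.0*math.exp(-V/18)
alpha_n = 0.01*(10-V) / (math.exp((10-V)/10) - 1)
beta_n  = 0.125*math.exp(-V/80)
alpha_h = 0.07*math.exp(-V/20)
beta_h  = 1 / (math.exp((30-V)/10) + 1)

print(alpha_m/(alpha_m+beta_m))  # m_inf ~ 0.053
print(alpha_n/(alpha_n+beta_n))  # n_inf ~ 0.318
print(alpha_h/(alpha_h+beta_h))  # h_inf ~ 0.596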

2. Neuron Code

(1) At initialization, use the steady-state values of m, n, h as initial values:

$\qquad m=\frac{\alpha_{m}}{\alpha_{m}+\beta_{m}}$

$\qquad n=\frac{\alpha_{n}}{\alpha_{n}+\beta_{n}}$

$\qquad h=\frac{\alpha_{h}}{\alpha_{h}+\beta_{h}}$

(2) update_var() updates the ion-channel gating variables m, n, h.

(3) neuronal_charge() computes the channel currents and updates the membrane potential.

(4) The HH equations fire and reset on their own, so neuronal_fire() and neuronal_reset() must both be overridden.

(5) When overriding neuronal_reset(), even though self.v is left unchanged, it must still be assigned through an operation (e.g. self.v = 1.0 * self.v) so that a new tensor is created each step; otherwise the recorded membrane potential will be wrong.

class HH_neuron(neuron.BaseNode):
    def __init__(self,
                 v_threshold = 80.0, v_reset = 0.0,
                 C = 1.0, dt = 1e-2,
                 g_Na = 120.0, g_K = 36.0, g_L = 0.3,
                 V_Na = 115.0, V_K = -12.0, V_L = 10.6):
        super().__init__(v_threshold, v_reset)
        self.C = C        # membrane capacitance
        self.dt = dt      # Euler step
        self.g_Na = g_Na  # channel conductances
        self.g_K = g_K
        self.g_L = g_L
        self.V_Na = V_Na  # reversal potentials
        self.V_K = V_K
        self.V_L = V_L
        self.v = 0.0      # membrane potential relative to rest
        self.fire = False # True while the neuron is above threshold
        # initialize m, n, h at their steady-state values for v = 0
        alpha_m = 0.1*(25-self.v) / (math.exp((25-self.v)/10)-1)
        alpha_n = 0.01*(10-self.v) / (math.exp((10-self.v)/10)-1)
        alpha_h = 0.07*math.exp(-self.v/20)
        beta_m = 4.0*math.exp(-self.v/18)
        beta_n = 0.125*math.exp(-self.v/80)
        beta_h = 1 / (math.exp((30-self.v)/10)+1)
        self.m = alpha_m/(alpha_m+beta_m)
        self.n = alpha_n/(alpha_n+beta_n)
        self.h = alpha_h/(alpha_h+beta_h)
    def update_var(self):
        # Euler update of the gating variables m, n, h.
        # Note: math.exp() only accepts scalars, so this version assumes a
        # single neuron (D = 1); use torch.exp for vectorized tensors.
        alpha_m = 0.1*(25-self.v) / (math.exp((25-self.v)/10)-1)
        alpha_n = 0.01*(10-self.v) / (math.exp((10-self.v)/10)-1)
        alpha_h = 0.07*math.exp(-self.v/20)
        beta_m = 4.0*math.exp(-self.v/18)
        beta_n = 0.125*math.exp(-self.v/80)
        beta_h = 1 / (math.exp((30-self.v)/10)+1)
        self.m += (alpha_m*(1-self.m) - beta_m*self.m) * self.dt
        self.n += (alpha_n*(1-self.n) - beta_n*self.n) * self.dt
        self.h += (alpha_h*(1-self.h) - beta_h*self.h) * self.dt
    def neuronal_charge(self, x: torch.Tensor):
        self.update_var()
        # channel currents and Euler update of the membrane potential
        I_Na = (self.g_Na*self.m**3*self.h)*(self.v-self.V_Na)
        I_K = (self.g_K*self.n**4)*(self.v-self.V_K)
        I_leak = self.g_L*(self.v-self.V_L)
        self.v += (x-I_Na-I_K-I_leak)/self.C*self.dt
    def neuronal_fire(self):
        # emit a spike on the upward threshold crossing only
        if not self.fire:
            if self.v > self.v_threshold:
                self.fire = True
            return self.surrogate_function(self.v - self.v_threshold)
        else:
            if self.v < self.v_threshold:
                self.fire = False
            # self.v - self.v is a zero tensor of the right shape,
            # so the argument is -1 and no spike is emitted
            return self.surrogate_function(self.v - self.v - 1.0)
    def neuronal_reset(self, spike):
        # the HH dynamics reset themselves; assign through an operation
        # so a new tensor is created each step (see note (5) above)
        self.v = 1.0 * self.v
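A minimal sanity check (added here, assuming the imports and the HH_neuron definition above): with zero input the membrane potential should stay near rest, since m, n, h start at their steady-state values:

hh = HH_neuron()  # default single-step mode
with torch.no_grad():
    for _ in range(1000):
        hh(torch.zeros(1, 1))
print(hh.v)  # expected to remain close to 0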

3. Neuron Response

(1) Apply a stimulus: zero input for 2000 steps, a constant input of 10 for 6000 steps, then zero input again

N = 1 # batch size
D = 1 # input dimension / number of neurons
x_seq1 = torch.zeros(2000, N, D)
x_seq2 = torch.ones(6000, N, D) * 10
x_seq3 = torch.zeros(2000, N, D)
x_seq = torch.cat((x_seq1, x_seq2, x_seq3), 0)
hh_neuron = HH_neuron(v_threshold = 80.0, v_reset = 0.0)
net = nn.Sequential(hh_neuron)
hh_neuron.step_mode = 'm'     # multi-step mode
hh_neuron.store_v_seq = True  # keep the membrane potential at every step
print(net)

(2) The remaining monitoring and plotting steps are the same as for the LIF neuron; a sketch repeating them follows.
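For completeness, here is a minimal sketch repeating the monitoring and plotting steps from the LIF section, applied to the HH network:

monitor_v = monitor.AttributeMonitor('v_seq', pre_forward=False,
                                     net=net, instance=neuron.BaseNode)
monitor_o = monitor.OutputMonitor(net=net, instance=neuron.BaseNode)
with torch.no_grad():
    net(x_seq)
functional.reset_net(net)
v_list = monitor_v.records[0].flatten()
s_list = monitor_o.records[0].flatten()
visualizing.plot_one_neuron_v_s(v_list.numpy(), s_list.numpy(),
                                v_threshold=net[0].v_threshold,
                                v_reset=net[0].v_reset,
                                figsize=(6,6), dpi=100)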

(3) HH neuron response

[Figure: membrane potential and output spikes of the HH neuron]


Summary

Defining a custom neuron mainly amounts to overriding the model's charging equation.

For the LIF model, the differential equation can be solved directly, giving a closed-form membrane-potential update.

For the HH model, the state must be advanced step by step with the Euler method, so simulation takes considerably longer.
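Concretely, the Euler update implemented in neuronal_charge() above is

$\qquad V[t+\Delta t]=V[t]+\frac{I-I_{Na}-I_{K}-I_{L}}{C_{m}}*\Delta t$

and likewise for the gating variables, e.g. $m[t+\Delta t]=m[t]+(\alpha_{m}*(1-m[t])-\beta_{m}*m[t])*\Delta t$.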


Reposted from blog.csdn.net/qq_53715621/article/details/137369058