ogbn_arxiv_GCN_res
This is an improvement over the baseline on the ogbn-arxiv dataset.
My code: https://github.com/ytchx1999/ogbn_arxiv_GCN_res
The framework is shown in the figure.
Improvement Strategy:
- Add skip-connections, inspired by DeeperGCN: the previous layer's output is added to the current layer's output in a fixed proportion, which both speeds up convergence and alleviates over-smoothing.
- Add initial residual connections, inspired by GCNII: following the initial-residual idea, X^(0) is added in a fixed proportion to the output of each later layer, which alleviates over-smoothing and also slightly improves accuracy.
- Add jumping knowledge, inspired by JKNet: as in JKNet, the output of every layer is saved, then softmax-weighted and summed to obtain the final node representation, which effectively alleviates over-smoothing.
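The three strategies above can be sketched in a single forward pass. The sketch below is illustrative rather than the repo's actual code: `nn.Linear` stands in for PyG's `GCNConv` so it runs without graph data, and the exact way `alpha` and `beta` weight the skip and initial-residual terms is an assumption based on the descriptions above.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class GCNResSketch(nn.Module):
    """Sketch of GCN_res: skip-connection + initial residual + jumping
    knowledge. nn.Linear stands in for GCNConv; in the real model each
    conv would also take edge_index."""

    def __init__(self, in_dim, hidden_dim, out_dim, num_layers=8,
                 alpha=0.2, beta=0.5, dropout=0.5):
        super().__init__()
        self.alpha, self.beta, self.dropout = alpha, beta, dropout
        self.input_fc = nn.Linear(in_dim, hidden_dim)
        self.convs = nn.ModuleList(
            [nn.Linear(hidden_dim, hidden_dim) for _ in range(num_layers)])
        self.bns = nn.ModuleList(
            [nn.BatchNorm1d(hidden_dim) for _ in range(num_layers)])
        # learnable weights for the jumping-knowledge softmax sum
        self.jk_weights = nn.Parameter(torch.ones(num_layers))
        self.out_fc = nn.Linear(hidden_dim, out_dim)

    def forward(self, x):
        x = self.input_fc(x)
        x0 = x  # saved for the initial residual (GCNII)
        layer_outs = []
        for conv, bn in zip(self.convs, self.bns):
            h = F.relu(bn(conv(x)))
            h = F.dropout(h, p=self.dropout, training=self.training)
            # skip-connection (DeeperGCN) + initial residual (GCNII)
            x = h + self.alpha * x + self.beta * x0
            layer_outs.append(x)
        # jumping knowledge (JKNet): softmax-weighted sum over all layers
        w = torch.softmax(self.jk_weights, dim=0)
        x = sum(wi * out for wi, out in zip(w, layer_outs))
        return self.out_fc(x)
```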
Experiment Setup:
The model has 8 layers and is trained for 500 epochs.

```bash
python ogbn_gcn_res.py
```
Detailed Hyperparameter:
num_layers = 8
hidden_dim = 128
dropout = 0.5
lr = 0.01
runs = 10
epochs = 500
alpha = 0.2
beta = 0.5
Result:
All runs:
Highest Train: 77.94 ± 0.50
Highest Valid: 73.69 ± 0.21
Final Train: 77.72 ± 0.46
Final Test: 72.62 ± 0.37
| Model | Test Accuracy | Valid Accuracy | Parameters | Hardware |
|---|---|---|---|---|
| GCN_res | 0.7262 ± 0.0037 | 0.7369 ± 0.0021 | 155824 | Tesla T4 (16GB) |
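The four "All runs" numbers follow OGB's logger convention: for each run, pick the epoch with the best validation accuracy and report the train and test accuracy at that epoch, then average over the 10 runs. A hypothetical helper that mirrors this aggregation (not the repo's actual logger):

```python
import torch

def summarize_runs(results):
    """results: float tensor of shape (runs, epochs, 3) holding the
    per-epoch (train, valid, test) accuracy in percent. Returns the
    mean/std per metric, mirroring OGB's Logger convention."""
    best_epoch = results[:, :, 1].argmax(dim=1)        # best-valid epoch per run
    idx = torch.arange(results.size(0))
    metrics = {
        "Highest Train": results[:, :, 0].max(dim=1).values,
        "Highest Valid": results[idx, best_epoch, 1],
        "Final Train": results[idx, best_epoch, 0],    # train at best-valid epoch
        "Final Test": results[idx, best_epoch, 2],     # test at best-valid epoch
    }
    return {k: (v.mean().item(), v.std().item()) for k, v in metrics.items()}
```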
I have submitted the model to the OGB leaderboard; the OGB team is currently verifying it, so we'll see whether it gets accepted.
Update 2021.2.22:
The OGB team has accepted my code, and the model currently ranks 19th.
https://ogb.stanford.edu/docs/leader_nodeprop/#ogbn-arxiv
The model already outperforms baselines such as GCN and GraphSAGE, so the first goal of my graduation project has been reached.
Comments and corrections are welcome!
Update 2021.2.25:
I later noticed that the standard deviation of my model's results was somewhat large, so I applied the FLAG method for adversarial data augmentation to stabilize the model and slightly improve accuracy.
GCN_res-FLAG
This is an improvement of the (GCN_res + 8 layers) model, using the FLAG method.
My code: https://github.com/ytchx1999/GCN_res-FLAG

- Check out the model: (GCN_res + 8 layers)
- Check out the FLAG method: FLAG
Improvement Strategy:
- Add the FLAG method (adversarial perturbation of node features during training)
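FLAG trains the model on adversarially perturbed input features rather than the raw features, accumulating gradients over several ascent steps on the perturbation. A minimal sketch of one FLAG training step, assuming a simple `(x, y)` batch; the function name and signature are illustrative, not the repo's actual API:

```python
import torch

def flag_step(model, x, y, loss_fn, optimizer, step_size=1e-3, m=3):
    """One FLAG training step (sketch): ascend on a feature perturbation
    for m steps while accumulating model gradients, then update once."""
    model.train()
    optimizer.zero_grad()
    # start from a random perturbation in [-step_size, step_size]
    perturb = torch.empty_like(x).uniform_(-step_size, step_size)
    perturb.requires_grad_()
    loss = loss_fn(model(x + perturb), y) / m
    for _ in range(m - 1):
        loss.backward()  # accumulates model grads and grad w.r.t. perturb
        # gradient ascent on the perturbation (sign step, as in FLAG)
        perturb.data = perturb.data + step_size * torch.sign(perturb.grad.data)
        perturb.grad.zero_()
        loss = loss_fn(model(x + perturb), y) / m
    loss.backward()
    optimizer.step()  # update with gradients averaged over the m steps
    return loss.item() * m
```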
Environment Requirements:
- pytorch == 1.7.1
- pytorch_geometric == 1.6.3
- ogb == 1.2.4
Experiment Setup:
The model has 8 layers and is trained for 10 runs of 500 epochs each.

```bash
python ogbn_gcn_res_flag.py
```
Detailed Hyperparameter:
num_layers = 8
hidden_dim = 128
dropout = 0.5
lr = 0.01
runs = 10
epochs = 500
alpha = 0.2
beta = 0.7
Result:
All runs:
Highest Train: 78.61 ± 0.49
Highest Valid: 73.89 ± 0.12
Final Train: 78.44 ± 0.46
Final Test: 72.76 ± 0.24
| Model | Test Accuracy | Valid Accuracy | Parameters | Hardware |
|---|---|---|---|---|
| GCN_res + FLAG | 0.7276 ± 0.0024 | 0.7389 ± 0.0012 | 155824 | Tesla T4 (16GB) |
As the results show, the model's accuracy improved slightly and the standard deviation dropped, which meets the expected goal.
About a day after I submitted the code to the OGB leaderboard, it was accepted. The model outperforms GCNII and ranks 18th.
https://ogb.stanford.edu/docs/leader_nodeprop/#ogbn-arxiv