Point Transformer V3: Reproduction and Core Code Walkthrough
1. Reproduction
1.1 Environment Setup
Clone the source code from the GitHub mirror:
git clone https://gitcode.com/gh_mirrors/po/PointTransformerV3.git
Set up the conda environment:
conda create -n pointcept python=3.8 -y
conda activate pointcept
conda install ninja -y
Install PyTorch; here I installed torch 1.11.0 with CUDA 11.3:
pip install torch==1.11.0+cu113 torchvision==0.12.0+cu113 torchaudio==0.11.0 --extra-index-url https://download.pytorch.org/whl/cu113
Install the Python dependencies:
pip install h5py pyyaml sharedarray tensorboard tensorboardx yapf addict einops scipy plyfile termcolor timm -i https://pypi.tuna.tsinghua.edu.cn/simple
The sharedarray package may fail to install on Windows; in that case, install this fork instead:
pip install git+https://github.com/imaginary-friend94/SharedNumpyArray
After it installs, edit Pointcept/pointcept/utils/cache.py and change the SharedArray import to:
# import SharedArray
import numpysharedarray
Also change every sharedarray.attach call in the code to:
if os.path.exists(f"/dev/shm/{name}"):
    # return SharedArray.attach(f"shm://{name}")
    return numpysharedarray.attach_mem_sh(f"shm://{name}")
Next, install the remaining dependencies:
conda install pytorch-cluster pytorch-scatter pytorch-sparse -c pyg -y
pip install torch-geometric
On Windows this step usually fails; install from local .whl files instead (see the linked wheel index), picking the packages that match your torch and CUDA versions:
pip install torch_cluster-1.6.0-cp38-cp38-win_amd64.whl
pip install torch_cluster-1.6.2+pt21cu118-cp38-cp38-win_amd64.whl
pip install torch_scatter-2.0.9-cp38-cp38-win_amd64.whl
# once the wheels above are installed, torch-geometric installs normally
pip install torch-geometric
Then install pointops, which is also prone to errors on Windows:
cd Pointcept/libs/pointops
python setup.py install
If the build fails with AttributeError: 'NoneType' object has no attribute 'split':
Traceback (most recent call last):
  File "setup.py", line 8, in <module>
    flag for flag in opt.split() if flag != "-Wstrict-prototypes"
AttributeError: 'NoneType' object has no attribute 'split'
comment out the os.environ["OPT"] block (the part that calls opt.split()) in Pointcept/libs/pointops/setup.py:
(opt,) = get_config_vars("OPT")
# os.environ["OPT"] = " ".join(
# flag for flag in opt.split() if flag != "-Wstrict-prototypes"
# )
Finally, install the sparse-convolution library spconv according to your CUDA version:
pip install spconv-cu113
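A quick sanity check that the install works, assuming the spconv.pytorch namespace of spconv 2.x:
# Verify the spconv install; spconv 2.x is imported via spconv.pytorch.
import torch
import spconv.pytorch as spconv

print(torch.__version__, torch.version.cuda)
print(spconv.SubMConv3d)  # should print the class, not raise ImportError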
1.2 Data Preprocessing
Here we use the S3DIS indoor scene point-cloud dataset as the example; it can be downloaded from the linked page. I used Stanford3dDataset_v1.2.
# S3DIS without aligned angle
python pointcept/datasets/preprocessing/s3dis/preprocess_s3dis.py --dataset_root ${S3DIS_DIR} --output_root ${PROCESSED_S3DIS_DIR}
# S3DIS with aligned angle
python pointcept/datasets/preprocessing/s3dis/preprocess_s3dis.py --dataset_root ${S3DIS_DIR} --output_root ${PROCESSED_S3DIS_DIR} --align_angle
# S3DIS with normal vector (recommended, normal is helpful)
python pointcept/datasets/preprocessing/s3dis/preprocess_s3dis.py --dataset_root ${S3DIS_DIR} --output_root ${PROCESSED_S3DIS_DIR} --raw_root ${RAW_S3DIS_DIR} --parse_normal
python pointcept/datasets/preprocessing/s3dis/preprocess_s3dis.py --dataset_root ${S3DIS_DIR} --output_root ${PROCESSED_S3DIS_DIR} --raw_root ${RAW_S3DIS_DIR} --align_angle --parse_normal
--dataset_root is the path of the downloaded dataset; --output_root is where the preprocessed data will be written.
1.3 Running Training
Here I use Pointcept/configs/s3dis/semseg-pt-v3m1-1-rpe.py as the model config file.
The training script is Pointcept/tools/train.py.
Change the dataset path in the config file to the preprocessed output path from Section 1.2 (a sketch of this edit is shown below).
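A minimal sketch of that edit, assuming the config exposes the path through a data_root variable as the Pointcept S3DIS configs do (search the config for "data_root" if the layout differs):
# Sketch: point data_root at the PROCESSED_S3DIS_DIR produced in Section 1.2.
data_root = "D:/PointTransformerV3/data/s3dis"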
Start training:
cd Pointcept/tools
python train.py --config-file D:\PointTransformerV3\Pointcept\configs\s3dis\semseg-pt-v3m1-1-rpe.py
When training starts, you may hit AssertionError: channel size mismatch. This happens when the backbone's input channel count does not match the per-point feature width (for example, if the data was preprocessed without normals, the features are color only, i.e. 3 channels). Change the backbone input channels in the config to 3 and the error goes away.
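The field to change is the backbone's in_channels; a minimal sketch (other keys in the model dict stay as they are in the original config):
model = dict(
    backbone=dict(
        type="PT-v3m1",
        in_channels=3,  # color only (r, g, b); use 6 if normals are parsed as well
        # ... remaining backbone settings unchanged
    ),
    # ... remaining model settings unchanged
)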
With that, training runs successfully.
2. Core Code Walkthrough
The whole framework revolves around the Trainer class in Pointcept/pointcept/engines/train.py. Its initializer builds the core components:
Build and instantiate the model:
self.model = self.build_model()
Build the logger/writer:
self.writer = self.build_writer()
Build the dataloader:
self.train_loader = self.build_train_loader()
Build the optimizer and LR scheduler:
self.optimizer = self.build_optimizer()
self.scheduler = self.build_scheduler()
The training loop, train, is also a member method of this class.
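Conceptually, train() is a standard epoch/iteration loop; a simplified sketch (not the exact Pointcept code, which also drives hooks for logging, checkpointing and evaluation):
# Simplified sketch of the training loop.
for epoch in range(start_epoch, max_epoch):
    model.train()
    for input_dict in train_loader:
        output_dict = model(input_dict)   # forward pass (see Section 2.4)
        loss = output_dict["loss"]
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
        scheduler.step()                  # per-iteration LR schedule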
2.1 Loading the Data
The build_train_loader() method first calls train_data = build_dataset(self.cfg.data.train), which constructs the dataset.
Dataset construction starts in Pointcept/pointcept/datasets/builder.py; cfg here is the config file described above.
The dataset class itself lives in Pointcept/pointcept/datasets/defaults.py. Its initializer mainly calls self.get_data_list(), which gathers the paths of every sample in the dataset into a list.
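In essence it globs the split directories under data_root; a simplified sketch consistent with that behavior (assumption: each sample is a sub-directory such as data_root/Area_1/office_1 for S3DIS):
import glob
import os

def get_data_list(self):
    # Collect every sample under data_root/<split>.
    if isinstance(self.split, str):
        data_list = glob.glob(os.path.join(self.data_root, self.split, "*"))
    else:
        # self.split can also be a list/tuple of splits (e.g. several S3DIS areas)
        data_list = []
        for split in self.split:
            data_list += glob.glob(os.path.join(self.data_root, split, "*"))
    return data_list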
2.2 dataloader
The dataloader part is the standard procedure: pass the constructed dataset to PyTorch's official DataLoader:
train_loader = torch.utils.data.DataLoader(
    train_data,
    batch_size=self.cfg.batch_size_per_gpu,
    shuffle=(train_sampler is None),
    num_workers=self.cfg.num_worker_per_gpu,
    sampler=train_sampler,
    collate_fn=partial(point_collate_fn, mix_prob=self.cfg.mix_prob),
    pin_memory=True,
    worker_init_fn=init_fn,
    drop_last=True,
    persistent_workers=True,
)
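The custom collate_fn is needed because the point clouds in a batch have different sizes: instead of stacking, per-point tensors are concatenated along dim 0 and offset is accumulated so that sample boundaries can be recovered. A simplified sketch of the idea (the real point_collate_fn is recursive and additionally applies batch-level mixing with probability mix_prob):
import torch

def simple_point_collate_fn(batch):
    # batch: list of dicts, each holding per-point tensors ("coord", "feat", ...)
    # and a 1-element "offset" tensor with that sample's point count.
    out = {}
    for key in batch[0].keys():
        if key == "offset":
            # cumulative point counts: [N1, N1+N2, N1+N2+N3, ...]
            out[key] = torch.cumsum(torch.cat([d[key] for d in batch]), dim=0)
        else:
            out[key] = torch.cat([d[key] for d in batch], dim=0)
    return out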
2.3 How the Model Reads a Sample
The instantiated dataset class inherits from the DefaultDataset class in Pointcept/pointcept/datasets/defaults.py, and all of its methods live in that parent class.
First, the __getitem__ method:
def __getitem__(self, idx):
    if self.test_mode:
        return self.prepare_test_data(idx)
    else:
        return self.prepare_train_data(idx)
Next is the prepare_train_data method, which chains two class methods:
1. get_data reads the sample's .npy files with numpy and casts them to float32; in the end it loads all four .npy files of the sample (for S3DIS these are typically coord, color, normal and segment).
2. transform runs the preprocessing / augmentation pipeline on the loaded arrays.
The result is a dict of tensors; offset is simply the size of the first dimension, i.e. the number of points in the sample (for example, a room with 80,000 points gets offset = [80000]).
2.4 forward
The entry point of the forward pass is the run_step function in Pointcept/pointcept/engines/train.py; it mainly moves the batch onto the compute device and then runs the forward pass.
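A simplified sketch of what run_step does (the real code also handles AMP, gradient clipping and hooks):
import torch

def run_step(self, input_dict):
    # Move every tensor in the batch dict onto the GPU.
    for key in input_dict.keys():
        if isinstance(input_dict[key], torch.Tensor):
            input_dict[key] = input_dict[key].cuda(non_blocking=True)
    # Forward pass: the segmentor returns a dict that contains the loss.
    output_dict = self.model(input_dict)
    loss = output_dict["loss"]
    # Backward pass and parameter update.
    self.optimizer.zero_grad()
    loss.backward()
    self.optimizer.step()
    self.scheduler.step()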
The actual forward computation is in the forward function of Pointcept/pointcept/models/default.py.
2.4.1 Point
The first step is Point(input_dict): the Point class wraps the batch dict and, among other things, converts offset into a per-point batch index.
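A sketch equivalent to what the offset2batch helper does (simplified; the real helper is implemented with torch ops in the model utilities):
import torch

def offset2batch(offset):
    # offset holds cumulative point counts, e.g. [N1, N1+N2, N1+N2+N3].
    # First recover the per-sample counts [N1, N2, N3] ...
    bincount = torch.diff(
        offset, prepend=torch.zeros(1, dtype=offset.dtype, device=offset.device)
    )
    # ... then emit one batch index per point: [0]*N1 + [1]*N2 + [2]*N3.
    return torch.arange(
        len(bincount), device=offset.device, dtype=torch.long
    ).repeat_interleave(bincount)

offset = torch.tensor([3, 5])   # two samples with 3 and 2 points
print(offset2batch(offset))     # tensor([0, 0, 0, 1, 1])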
2.4.2 backbone
Next comes the Point Transformer V3 backbone, which performs the main feature extraction; it lives in Pointcept/pointcept/models/point_transformer_v3/point_transformer_v3m1_base.py.
Its forward also starts with Point(data_dict); since the input was already turned into a Point above, this simply returns it unchanged.
2.4.2.1 point.serialization
Below is a brief look at the four space-filling-curve orders in the config, order=["z", "z-trans", "hilbert", "hilbert-trans"].
PTv3 uses space-filling curves such as the Z-order (Morton) curve and the Hilbert curve to traverse the points of a 3-D scene. These curves largely preserve spatial locality: points that are close in 3-D tend to end up close in the resulting 1-D ordering. Mathematically, at serialization depth p a space-filling curve defines a bijection φ: {0, ..., 2^p - 1}^3 → {0, ..., 2^(3p) - 1}, mapping each 3-D grid coordinate to a single integer code, i.e. a position along a 1-D curve. The "-trans" variants apply the same curves with the coordinate axes permuted, giving a second, differently oriented traversal.
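As an illustration only (the real encoder in Pointcept is vectorized over torch tensors and also packs the batch index into the high bits of the code), a z-order / Morton code for a single grid coordinate simply interleaves the bits of x, y and z:
def z_order_encode(x: int, y: int, z: int, depth: int = 16) -> int:
    # Interleave the bits of (x, y, z): bit i of each axis goes to bits
    # 3*i, 3*i+1, 3*i+2 of the code, so nearby cells get nearby codes.
    code = 0
    for i in range(depth):
        code |= ((x >> i) & 1) << (3 * i)
        code |= ((y >> i) & 1) << (3 * i + 1)
        code |= ((z >> i) & 1) << (3 * i + 2)
    return code

print(z_order_encode(1, 0, 0))  # 1
print(z_order_encode(0, 1, 0))  # 2
print(z_order_encode(1, 1, 1))  # 7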
The implementation is in Pointcept/pointcept/models/utils/structure.py:
def serialization(self, order="z", depth=None, shuffle_orders=False):
    """
    Point Cloud Serialization
    relay on ["grid_coord" or "coord" + "grid_size", "batch", "feat"]
    """
    assert "batch" in self.keys()
    if "grid_coord" not in self.keys():
        # if you don't want to operate GridSampling in data augmentation,
        # please add the following augmentation into your pipeline:
        # dict(type="Copy", keys_dict={"grid_size": 0.01}),
        # (adjust `grid_size` to what you want)
        assert {"grid_size", "coord"}.issubset(self.keys())
        self["grid_coord"] = torch.div(
            self.coord - self.coord.min(0)[0], self.grid_size, rounding_mode="trunc"
        ).int()

    if depth is None:
        # Adaptive measure the depth of serialization cube (length = 2 ^ depth)
        depth = int(self.grid_coord.max()).bit_length()
    self["serialized_depth"] = depth
    # Maximum bit length for serialization code is 63 (int64)
    assert depth * 3 + len(self.offset).bit_length() <= 63
    # Here we follow OCNN and set the depth limitation to 16 (48bit) for the point position.
    # Although depth is limited to less than 16, we can encode a 655.36^3 (2^16 * 0.01) meter^3
    # cube with a grid size of 0.01 meter. We consider it is enough for the current stage.
    # We can unlock the limitation by optimizing the z-order encoding function if necessary.
    assert depth <= 16

    # The serialization codes are arranged as following structures:
    # [Order1 ([n]),
    #  Order2 ([n]),
    #  ...
    #  OrderN ([n])] (k, n)
    code = [
        encode(self.grid_coord, self.batch, depth, order=order_) for order_ in order
    ]
    code = torch.stack(code)
    order = torch.argsort(code)
    inverse = torch.zeros_like(order).scatter_(
        dim=1,
        index=order,
        src=torch.arange(0, code.shape[1], device=order.device).repeat(
            code.shape[0], 1
        ),
    )

    if shuffle_orders:
        perm = torch.randperm(code.shape[0])
        code = code[perm]
        order = order[perm]
        inverse = inverse[perm]

    self["serialized_code"] = code
    self["serialized_order"] = order
    self["serialized_inverse"] = inverse
2.4.2.2 Sparsification
The implementation is also in Pointcept/pointcept/models/utils/structure.py; it mainly prepares a spconv.SparseConvTensor so that sparse convolution (spconv) can be applied:
def sparsify(self, pad=96):
    """
    Point Cloud Serialization
    Point cloud is sparse, here we use "sparsify" to specifically refer to
    preparing "spconv.SparseConvTensor" for SpConv.
    relay on ["grid_coord" or "coord" + "grid_size", "batch", "feat"]
    pad: padding sparse for sparse shape.
    """
    assert {"feat", "batch"}.issubset(self.keys())
    if "grid_coord" not in self.keys():
        # if you don't want to operate GridSampling in data augmentation,
        # please add the following augmentation into your pipeline:
        # dict(type="Copy", keys_dict={"grid_size": 0.01}),
        # (adjust `grid_size` to what you want)
        assert {"grid_size", "coord"}.issubset(self.keys())
        self["grid_coord"] = torch.div(
            self.coord - self.coord.min(0)[0], self.grid_size, rounding_mode="trunc"
        ).int()
    if "sparse_shape" in self.keys():
        sparse_shape = self.sparse_shape
    else:
        sparse_shape = torch.add(
            torch.max(self.grid_coord, dim=0).values, pad
        ).tolist()
    sparse_conv_feat = spconv.SparseConvTensor(
        features=self.feat,
        indices=torch.cat(
            [self.batch.unsqueeze(-1).int(), self.grid_coord.int()], dim=1
        ).contiguous(),
        spatial_shape=sparse_shape,
        batch_size=self.batch[-1].tolist() + 1,
    )
    self["sparse_shape"] = sparse_shape
    self["sparse_conv_feat"] = sparse_conv_feat
2.4.2.3 embedding
Inside the embedding stage, the code again lives in Pointcept/pointcept/models/point_transformer_v3/point_transformer_v3m1_base.py. self.stem is simply a sparse convolution + BatchNorm + GELU activation.
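Sketched with plain spconv building blocks (kernel size and channel widths are illustrative placeholders, not the config values; the real module is wrapped in Pointcept's own sequential container):
import torch.nn as nn
import spconv.pytorch as spconv

# Illustrative stem: submanifold sparse conv -> BatchNorm -> GELU.
stem = spconv.SparseSequential(
    spconv.SubMConv3d(in_channels=3, out_channels=32, kernel_size=5,
                      padding=1, bias=False, indice_key="stem"),
    nn.BatchNorm1d(32),
    nn.GELU(),
)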
2.4.2.4 encoder
The main feature extraction is in Pointcept/pointcept/models/point_transformer_v3/point_transformer_v3m1_base.py. self.cpe(point), the conditional positional encoding, consists of a sparse convolution, a fully connected layer and layer normalization.
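An isolated sketch of a CPE block with that shape (channel width is a placeholder; the residual connection and exact wiring of the real block are omitted):
import torch.nn as nn
import spconv.pytorch as spconv

class CPESketch(nn.Module):
    # Illustrative conditional positional encoding: sparse 3x3x3 conv for local
    # geometric context, then Linear + LayerNorm on the per-point features.
    def __init__(self, channels: int = 32):
        super().__init__()
        self.conv = spconv.SubMConv3d(channels, channels, kernel_size=3,
                                      bias=True, indice_key="cpe")
        self.fc = nn.Linear(channels, channels)
        self.norm = nn.LayerNorm(channels)

    def forward(self, sparse_tensor):
        out = self.conv(sparse_tensor)            # spconv.SparseConvTensor
        feat = self.norm(self.fc(out.features))   # (N, channels)
        return out.replace_feature(feat)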
The core is self.attn (serialized attention), defined in the same file, Pointcept/pointcept/models/point_transformer_v3/point_transformer_v3m1_base.py. Its multi-head QKV attention is computed much like in a Vision Transformer (softmax of the scaled dot product of Q and K, applied to V), except that attention is restricted to fixed-size patches of points taken along the serialization order.
def forward(self, point):
    if not self.enable_flash:
        # cap the patch size by the smallest point cloud in the batch (e.g. 128)
        self.patch_size = min(
            offset2bincount(point.offset).min().tolist(), self.patch_size_max
        )

    H = self.num_heads   # number of attention heads, e.g. 2
    K = self.patch_size  # serialized patch size, e.g. 128
    C = self.channels    # feature channels, e.g. 32

    pad, unpad, cu_seqlens = self.get_padding_and_inverse(point)

    order = point.serialized_order[self.order_index][pad]
    inverse = unpad[point.serialized_inverse[self.order_index]]

    # padding and reshape feat and batch for serialized point patch
    qkv = self.qkv(point.feat)[order]

    if not self.enable_flash:
        # encode and reshape qkv: (N', K, 3, H, C') => (3, N', H, K, C')
        q, k, v = (
            qkv.reshape(-1, K, 3, H, C // H).permute(2, 0, 3, 1, 4).unbind(dim=0)
        )
        # attn
        if self.upcast_attention:
            q = q.float()
            k = k.float()
        attn = (q * self.scale) @ k.transpose(-2, -1)  # (N', H, K, K)
        if self.enable_rpe:
            attn = attn + self.rpe(self.get_rel_pos(point, order))
        if self.upcast_softmax:
            attn = attn.float()
        attn = self.softmax(attn)
        attn = self.attn_drop(attn).to(qkv.dtype)
        feat = (attn @ v).transpose(1, 2).reshape(-1, C)
    else:
        feat = flash_attn.flash_attn_varlen_qkvpacked_func(
            qkv.half().reshape(-1, 3, H, C // H),
            cu_seqlens,
            max_seqlen=self.patch_size,
            dropout_p=self.attn_drop if self.training else 0,
            softmax_scale=self.scale,
        ).reshape(-1, C)
        feat = feat.to(qkv.dtype)
    feat = feat[inverse]

    # ffn
    feat = self.proj(feat)
    feat = self.proj_drop(feat)
    point.feat = feat
    return point