1. Error: ValueError: signal only works in main thread
File "F:\ProgramData\Anaconda3\envs\yolo5\lib\site-packages\paddlex\cv\models\base.py", line 240, in net_initialize
pretrain_weights = get_pretrain_weights(
File "F:\ProgramData\Anaconda3\envs\yolo5\lib\site-packages\paddlex\cv\models\utils\pretrain_weights.py", line 208, in get_pretrain_weights
import paddlehub as hub
File "F:\ProgramData\Anaconda3\envs\yolo5\lib\site-packages\paddlehub\__init__.py", line 30, in <module>
from . import dataset
File "F:\ProgramData\Anaconda3\envs\yolo5\lib\site-packages\paddlehub\dataset\__init__.py", line 24, in <module>
from .squad import SQUAD
File "F:\ProgramData\Anaconda3\envs\yolo5\lib\site-packages\paddlehub\dataset\squad.py", line 20, in <module>
from paddlehub.reader import tokenization
File "F:\ProgramData\Anaconda3\envs\yolo5\lib\site-packages\paddlehub\reader\__init__.py", line 22, in <module>
from .cv_reader import ImageClassificationReader
File "F:\ProgramData\Anaconda3\envs\yolo5\lib\site-packages\paddlehub\reader\cv_reader.py", line 26, in <module>
from ..contrib.ppdet.data.reader import Reader
File "F:\ProgramData\Anaconda3\envs\yolo5\lib\site-packages\paddlehub\contrib\ppdet\data\reader.py", line 28, in <module>
from .transform import build_mapper, map, batch, batch_map
File "F:\ProgramData\Anaconda3\envs\yolo5\lib\site-packages\paddlehub\contrib\ppdet\data\transform\__init__.py", line 24, in <module>
from .parallel_map import ParallelMappedDataset
File "F:\ProgramData\Anaconda3\envs\yolo5\lib\site-packages\paddlehub\contrib\ppdet\data\transform\parallel_map.py", line 229, in <module>
signal.signal(signal.SIGTERM, _reader_exit)
File "F:\ProgramData\Anaconda3\envs\yolo5\lib\signal.py", line 47, in signal
handler = _signal.signal(_enum_to_int(signalnum), _enum_to_int(handler))
ValueError: signal only works in main thread
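Why this happens can be shown with a minimal, self-contained sketch (standard library only, no PaddleX involved): Python's signal.signal raises exactly this ValueError when it is called from any thread other than the main one.

```python
import signal
import threading

def register_handler():
    """Try to install a SIGTERM handler; return 'ok' or the error message."""
    try:
        signal.signal(signal.SIGTERM, lambda signum, frame: None)
        return "ok"
    except ValueError as exc:
        return str(exc)

# From the main thread the registration succeeds...
print(register_handler())  # ok

# ...but the same call from a worker thread raises ValueError.
result = {}
worker = threading.Thread(target=lambda: result.update(msg=register_handler()))
worker.start()
worker.join()
print(result["msg"])  # signal only works in main thread ...
```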
This error occurs because I start the training in a child thread, while signal.signal can only be called from the main thread, so the exception is raised. My workaround is to comment out the line signal.signal(signal.SIGTERM, _reader_exit) in parallel_map.py, and to set num_workers=1 when loading the dataset so that data is not loaded by multiple worker threads. I have reported this bug upstream. The dataset is then created like this:
train_dataset = pdx.datasets.CocoDetection(
    data_dir=option.trainDataDir,
    num_workers=1,  # set to 1 to avoid multi-threaded data loading
    ann_file=os.path.join(option.trainDataDir, 'train_annotations.json'),  # e.g. 'xiaoduxiong_ins_det/train.json'
    transforms=train_transforms,
    shuffle=True)
2. Error: cublas64_100.dll is missing
The DLL can be downloaded from
https://www.dll-files.com/download/0e506d21dd9e1be9d60d2ad215af943f/cublas64_100.dll.html?c=eGFndlZTdzk3VGVpNk13Z09DanBLZz09
and placed in C:\Windows\System32, which fixes the problem.
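To check from Python whether the loader can actually find the DLL before launching training, a small portable helper can be used (my own sketch, not part of PaddleX or CUDA):

```python
import ctypes.util

def has_library(name):
    """Return True if the dynamic loader can locate the given library.

    Pass the name without the .dll suffix (e.g. "cublas64_100");
    on Windows this searches PATH and the system directories.
    """
    return ctypes.util.find_library(name) is not None

if __name__ == "__main__":
    print("cublas64_100 present:", has_library("cublas64_100"))
```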
3. The following error: during evaluation, a category was found that does not exist in the training set, so the category index is out of range.
Traceback (most recent call last):
File "f:\project\AI\ai.cycleblock.cn\common\TrainThread_Paddlex.py", line 135, in main
self.starttrain(opt)
File "f:\project\AI\ai.cycleblock.cn\common\TrainThread_Paddlex.py", line 257, in starttrain
model.train(
File "F:\ProgramData\Anaconda3\envs\yolo5\lib\site-packages\paddlex\cv\models\mask_rcnn.py", line 220, in train
self.train_loop(
File "F:\ProgramData\Anaconda3\envs\yolo5\lib\site-packages\paddlex\cv\models\base.py", line 557, in train_loop
self.eval_metrics, self.eval_details = self.evaluate(
File "F:\ProgramData\Anaconda3\envs\yolo5\lib\site-packages\paddlex\cv\models\mask_rcnn.py", line 316, in evaluate
ap_stats, eval_details = eval_results(
File "F:\ProgramData\Anaconda3\envs\yolo5\lib\site-packages\paddlex\cv\models\utils\detection_eval.py", line 56, in eval_results
box_ap_stats, xywh_results = coco_bbox_eval(
File "F:\ProgramData\Anaconda3\envs\yolo5\lib\site-packages\paddlex\cv\models\utils\detection_eval.py", line 119, in coco_bbox_eval
xywh_results = bbox2out(
File "F:\ProgramData\Anaconda3\envs\yolo5\lib\site-packages\paddlex\cv\models\utils\detection_eval.py", line 312, in bbox2out
catid = (clsid2catid[int(clsid)])
KeyError: 12
After investigation, I found that my validation annotation file val_annotations.json declared fewer categories than train_annotations.json.
Solution: modify the code that generates these two files so that the categories in both are exactly the same. That solved it.
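To catch this mismatch before training starts, the category lists of the two COCO annotation files can be compared directly. A minimal sketch (the file names are the ones from my setup; the helper itself is not part of PaddleX):

```python
import json

def category_ids(ann_file):
    """Return the set of category ids declared in a COCO annotation file."""
    with open(ann_file, "r", encoding="utf-8") as f:
        return {cat["id"] for cat in json.load(f).get("categories", [])}

def categories_match(train_ann, val_ann):
    """Print any differences and return whether both files declare the same categories."""
    train_ids = category_ids(train_ann)
    val_ids = category_ids(val_ann)
    if train_ids - val_ids:
        print("val is missing category ids:", sorted(train_ids - val_ids))
    if val_ids - train_ids:
        print("val has extra category ids:", sorted(val_ids - train_ids))
    return train_ids == val_ids

# Example usage:
# categories_match("train_annotations.json", "val_annotations.json")
```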