video-caffe: building a 3D CNN and training on UCF-101 (example)



video-caffe GitHub repository: https://github.com/chuckcho/video-caffe

Build process:

Key steps to build video-caffe are:

  • 1.git clone https://github.com/chuckcho/video-caffe.git
  • 2.cd video-caffe
  • 3.cp Makefile.config.example Makefile.config
  • 4.Make sure CUDA and CuDNN are detected and their paths are correct (in Makefile.config, uncomment the USE_CUDNN := 1 line by removing the leading #).
  • 5.make all -j8 (the -j value sets the number of parallel build jobs)

Build complete.
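The build steps above can be sketched as a single shell session. The sed expression for uncommenting USE_CUDNN is my own shorthand for the manual edit described in step 4, and -j8 is just an example job count:

```shell
# Sketch of the video-caffe build steps (adjust -j to your CPU core count).
git clone https://github.com/chuckcho/video-caffe.git
cd video-caffe
cp Makefile.config.example Makefile.config
# Step 4: uncomment "# USE_CUDNN := 1" so the build uses CuDNN.
# (GNU sed shown; on macOS use `sed -i ''`.)
sed -i 's/^# *USE_CUDNN := 1/USE_CUDNN := 1/' Makefile.config
# Step 5: compile with 8 parallel jobs.
make all -j8
```

You may also want to edit the CUDA_DIR and CUDNN paths in Makefile.config before running make if they differ from the defaults.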


UCF-101 training demo

Scripts and training files for C3D training on UCF-101 are located in examples/c3d_ucf101/. Steps to train C3D on UCF-101:

  • 1.Download UCF-101 dataset from UCF-101 website.
  • 2.Unzip the dataset: e.g. unrar x UCF101.rar
  • 3.(Optional) The video reader works more stably with extracted frames than directly with video files. Extract frames from the UCF-101 videos by revising and running the helper script ${video-caffe-root}/examples/c3d_ucf101/extract_UCF-101_frames.sh (the ffmpeg command can also be used directly).
  • 4.Change ${video-caffe-root}/examples/c3d_ucf101/c3d_ucf101_{train,test}_split1.txt to correctly point to UCF-101 videos or directories that contain extracted frames.
  • 5.Modify ${video-caffe-root}/examples/c3d_ucf101/c3d_ucf101_train_test.prototxt to your taste or HW specification. Especially batch_size may need to be adjusted for the GPU memory.
  • 6.Run the training script: e.g. cd ${video-caffe-root} && examples/c3d_ucf101/train_ucf101.sh (optionally pass --gpu to use multiple GPUs)
  • 7.(Optional) Occasionally run ${video-caffe-root}/tools/extra/plot_training_loss.sh to plot training loss and validation accuracy (top-1/top-5). It’s pretty hacky, so look at the file and adapt it to your needs. (Caffe also ships a small plotting script, caffe/tools/extra/plot_training_log.py.example.)
  • 8.At 7 epochs of training, clip accuracy should be around 45%.
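For step 3, a minimal frame-extraction loop with ffmpeg might look like the sketch below. The SRC/DST directory names and the one-directory-of-JPEGs-per-video layout are assumptions; check the train/test split files for the layout the data layer actually expects:

```shell
# Sketch: extract JPEG frames from every UCF-101 video with ffmpeg.
# Assumed layout: UCF-101/<class>/<video>.avi  ->  UCF-101-frames/<class>/<video>/000001.jpg ...
SRC=UCF-101
DST=UCF-101-frames
for video in "$SRC"/*/*.avi; do
  class=$(basename "$(dirname "$video")")   # action class, e.g. ApplyEyeMakeup
  name=$(basename "$video" .avi)            # video name without extension
  mkdir -p "$DST/$class/$name"
  # -qscale:v 2 gives high-quality JPEGs; %06d.jpg numbers frames from 000001.
  ffmpeg -loglevel error -i "$video" -qscale:v 2 "$DST/$class/$name/%06d.jpg"
done
```

After extraction, point the paths in c3d_ucf101_{train,test}_split1.txt at these per-video frame directories (step 4).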


Reposted from blog.csdn.net/xunan003/article/details/81000935