【DL(3)】 caffe2 + detectron

1. "Upgrading" cuDNN

I need Detectron for current work, and it officially recommends cuDNN 6 or later. The cuDNN installed earlier for py-faster-rcnn was 5.0, so it had to be upgraded from 5.0 to 6.0.

It is called an upgrade, but in practice it just means deleting the 5.0 headers and libraries and dropping the 6.0 ones in their place...

Delete the three old library files and the header. From a terminal:

    cd /usr/local/cuda-8.0/lib64/ 
    sudo rm -rf libcudnn.so libcudnn.so.5 libcudnn.so.5.0.5
    cd /usr/local/cuda-8.0/include/
    sudo rm -rf cudnn.h

Then copy the corresponding files from the already-downloaded cuDNN 6 package into place:

    7910:~/Downloads/cudnn-8.0-linux-x64-v6.0-ga/lib64$ sudo cp lib* /usr/local/cuda-8.0/lib64/
    7910:~/Downloads/cudnn-8.0-linux-x64-v6.0-ga/include$ sudo cp cudnn.h /usr/local/cuda-8.0/include/

Then create the symbolic links:

    7910:/usr/local/cuda-8.0/lib64$ sudo chmod +r libcudnn.so.6.0.21
    7910:/usr/local/cuda-8.0/lib64$ sudo ln -sf libcudnn.so.6.0.21 libcudnn.so.6
    7910:/usr/local/cuda-8.0/lib64$ sudo ln -sf libcudnn.so.6 libcudnn.so
    7910:/usr/local/cuda-8.0/lib64$ sudo ldconfig

After the ldconfig step it complained that libcudnn.so.6 is not a symbolic link. This happens because cp lib* copies the symlinks inside the cuDNN tarball as plain files; recreating libcudnn.so.6 with ln -sf, as above, clears the warning. At the time I just ignored it and moved on.

Verify that the version switch succeeded. Oddly, running locate libcudnn.so in the terminal still listed the old version (a reboot might help; locate also reads from a cached database, so it can show stale entries until sudo updatedb is run).
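A more direct check, assuming the files were copied to the default CUDA 8.0 locations used above, is to read the version macros from the installed header and ask the dynamic linker which libcudnn it resolves:

    # Should print the CUDNN_MAJOR / CUDNN_MINOR / CUDNN_PATCHLEVEL defines (6 / 0 / 21)
    grep -m 3 -E "CUDNN_MAJOR|CUDNN_MINOR|CUDNN_PATCHLEVEL" /usr/local/cuda-8.0/include/cudnn.h
    # Ask the dynamic linker which libcudnn it currently resolves
    ldconfig -p | grep libcudnn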

2. Recompiling the previously built py-faster-rcnn

Since the cuDNN version changed and py-faster-rcnn had been built against 5.0, it was bound to break under 6.0; running the demo confirmed it.

Rebuild py-faster-rcnn and it works again:

7910:~/py-faster-rcnn/caffe-fast-rcnn$ make clean
7910:~/py-faster-rcnn/caffe-fast-rcnn$ make -j16
7910:~/py-faster-rcnn/caffe-fast-rcnn$ make pycaffe -j16

3. Installing Caffe2

https://caffe2.ai/docs/getting-started.html?platform=ubuntu&configuration=compile

With cuDNN upgraded, I started installing Caffe2 and found that the Caffe2 source now lives inside the pytorch repository. I will dig into that structure later: how much of torch it actually uses, whether it is independent of torch, and why it was moved under pytorch.

(1) Installing dependencies

Because py-faster-rcnn and its bundled Caffe had been installed earlier, most libraries were already present; only the following still needed to be installed:

    7910:~$ sudo apt-get install -y --no-install-recommends \
    > libgtest-dev \
    > libiomp-dev \
    > openmpi-doc 
    7910:~$ sudo pip install \
    > future 

(2) Installing per the official instructions

Note: set NCCL to OFF in cmake, since I only have one GPU (a command-line sketch for this is given right after the official commands below).

# Clone Caffe2's source code from our Github repository
git clone --recursive https://github.com/pytorch/pytorch.git && cd pytorch
git submodule update --init

# Create a directory to put Caffe2's build files in
mkdir build && cd build

# Configure Caffe2's build
# This looks for packages on your machine and figures out which functionality
# to include in the Caffe2 installation. The output of this command is very
# useful in debugging.
cmake .. 
# After cmake finishes, you can open Caffe2Config.cmake in the build folder, change the settings there, and re-run configure and generate. Alternatively, edit the settings directly in CMakeLists.txt under pytorch.

# Compile, link, and install Caffe2
sudo make install
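A minimal sketch of the same configure step with NCCL switched off directly on the command line, instead of editing the generated cmake files afterwards (the flag name matches the USE_NCCL entry that appears in the cmake summary further down):

    # From inside the build directory: same configure step, but with NCCL disabled (single-GPU machine)
    cmake .. -DUSE_NCCL=OFF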

Because py-faster-rcnn and its bundled Caffe were installed earlier, many dependencies were already on the system, which also brought a series of headaches.

Below is a detailed account of how the problems arose and how they were solved.

I simply followed the official instructions, which are very concise: https://caffe2.ai/docs/getting-started.html?platform=ubuntu&configuration=compile

But for the reasons above, the problems came one after another.

Problem 1: cmake fails right away because it cannot find OpenCV.

A more experienced colleague said this comes from having multiple OpenCV versions on the system. The earlier Caffe apparently used OpenCV 3.3.0, while my system copy is 2.4.9. Why the earlier Caffe built without complaint I have not bothered to find out; the missing-OpenCV error only appeared when building Caffe2.

The fix was to rebuild OpenCV, run sudo make install, and then add one of the following lines to CMakeLists.txt under the pytorch folder (I forget which; probably the first, uncommented one):

set(OpenCV_DIR "/usr/local/lib")
#set(OpenCV_DIR "/home/yexin/Downloads/opencv-3.3.0/build")

To be clear, this did solve the problem at the time. Later, after the other issues were fixed, I commented the line out again and the build still went through without errors...

Problem 2: Eigen not found

During cmake, an error said Eigen could not be found under a certain path. This was odd, since Eigen is clearly present in the third_party folder, but no setting would make cmake pick it up. So I downloaded Eigen 3.3.4, placed it under the path the error message pointed to, and re-ran cmake; this time it was found. (The freshly downloaded copy was cmake'd once but, as far as I remember, never make install'ed; a colleague helped with this part, and going back to the folder later I found no built libraries in it, so I suspect simply placing the headers at that path is enough.)
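For reference, since Eigen is header-only, "placing it at the path" can be as simple as copying the headers. The paths below are assumptions (the archive unpacked in ~/Downloads, and /usr/local/include/eigen3 as the target, which is where the later cmake run reports finding Eigen); substitute whatever directory your cmake error names:

    # Eigen is header-only: copying the headers to the expected include path is usually enough
    sudo mkdir -p /usr/local/include/eigen3
    sudo cp -r ~/Downloads/eigen-3.3.4/Eigen ~/Downloads/eigen-3.3.4/unsupported /usr/local/include/eigen3/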

Problem 3: protobuf version trouble

With the issues above resolved, cmake no longer errored out, but after sudo make install, the check below for whether Caffe2 works kept printing Failure:

cd ~ && python -c 'from caffe2.python import core' 2>/dev/null && echo "Success" || echo "Failure"

Running this test script also produced an error:

python caffe2/python/operator_test/relu_op_test.py

The error was: __init__() got an unexpected keyword argument 'syntax'

My first guess was that the __init__ file (under pytorch/caffe2/) was not in the same place as the Python files, so Python could not find it at import time and raised the error. But posts online say this is a protobuf version problem, so I tried reinstalling protobuf. (Before reinstalling, the protobuf on the system was already 3.5.0, exactly the version I was about to install again, yet reinstalling did fix the problem... see the explanation below.)
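Before reinstalling anything, it can help to compare the protobuf versions seen by the native protoc and by Python; the 'syntax' keyword error is typically a sign that generated *_pb2.py files and the installed Python protobuf package do not match (standard commands, nothing Caffe2-specific):

    # Version of the protoc compiler on the PATH
    protoc --version
    # Version of the protobuf package that Python actually imports
    python -c "import google.protobuf; print(google.protobuf.__version__)"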

Uninstall the existing protobuf and reinstall it:

sudo pip uninstall protobuf
cd pytorch
cd third_party
cd protobuf
cd cmake
mkdir -p build
cd build
cmake .. \
-DCMAKE_INSTALL_PREFIX=$HOME/c2_tp_protobuf \
-Dprotobuf_BUILD_TESTS=OFF -DCMAKE_CXX_FLAGS="-fPIC"
sudo make install

Some posts say environment variables need to be changed, but changing them did not help me, so in the end I left them alone.

The key to fixing it was making the Python environment pick up this protobuf:

cd pytorch/third_party/protobuf/python
python setup.py build
sudo python setup.py install

Since I only ran the setup.py build above after reinstalling protobuf, and the problem then went away, my guess is that the reinstall may not even be necessary; just running setup.py install might be enough.

The final, successful cmake output was:

yexin@yexin-Precision-Tower-7910:~/pytorch/build$ cmake ..
-- The CXX compiler identification is GNU 5.4.0
-- The C compiler identification is GNU 5.4.0
-- Check for working CXX compiler: /usr/bin/c++
-- Check for working CXX compiler: /usr/bin/c++ -- works
-- Detecting CXX compiler ABI info
-- Detecting CXX compiler ABI info - done
-- Detecting CXX compile features
-- Detecting CXX compile features - done
-- Check for working C compiler: /usr/bin/cc
-- Check for working C compiler: /usr/bin/cc -- works
-- Detecting C compiler ABI info
-- Detecting C compiler ABI info - done
-- Detecting C compile features
-- Detecting C compile features - done
-- Not forcing any particular BLAS to be found
-- Build type not set - defaulting to Release
-- Performing Test CAFFE2_LONG_IS_INT32_OR_64
-- Performing Test CAFFE2_LONG_IS_INT32_OR_64 - Success
-- Does not need to define long separately.
-- Performing Test CAFFE2_EXCEPTION_PTR_SUPPORTED
-- Performing Test CAFFE2_EXCEPTION_PTR_SUPPORTED - Success
-- std::exception_ptr is supported.
-- Performing Test CAFFE2_IS_NUMA_AVAILABLE
-- Performing Test CAFFE2_IS_NUMA_AVAILABLE - Success
-- NUMA is available
-- Performing Test CAFFE2_NEED_TO_TURN_OFF_DEPRECATION_WARNING
-- Performing Test CAFFE2_NEED_TO_TURN_OFF_DEPRECATION_WARNING - Success
-- Performing Test CAFFE2_COMPILER_SUPPORTS_AVX2_EXTENSIONS
-- Performing Test CAFFE2_COMPILER_SUPPORTS_AVX2_EXTENSIONS - Success
-- Current compiler supports avx2 extention. Will build perfkernels.
-- Performing Test COMPILER_SUPPORTS_HIDDEN_VISIBILITY
-- Performing Test COMPILER_SUPPORTS_HIDDEN_VISIBILITY - Success
-- Performing Test COMPILER_SUPPORTS_HIDDEN_INLINE_VISIBILITY
-- Performing Test COMPILER_SUPPORTS_HIDDEN_INLINE_VISIBILITY - Success
-- Building using own protobuf under third_party per request.
-- Use custom protobuf build.
-- Looking for pthread.h
-- Looking for pthread.h - found
-- Looking for pthread_create
-- Looking for pthread_create - not found
-- Looking for pthread_create in pthreads
-- Looking for pthread_create in pthreads - not found
-- Looking for pthread_create in pthread
-- Looking for pthread_create in pthread - found
-- Found Threads: TRUE  
-- Caffe2 protobuf include directory: $<BUILD_INTERFACE:/home/yexin/pytorch/third_party/protobuf/src>$<INSTALL_INTERFACE:include>
-- Found Git: /usr/bin/git (found version "2.7.4") 
-- The BLAS backend of choice:Eigen
-- Brace yourself, we are building NNPACK
-- The ASM compiler identification is GNU
-- Found assembler: /usr/bin/cc
-- Found PythonInterp: /usr/bin/python (found version "2.7.12") 
-- Check if compiler accepts -pthread
-- Check if compiler accepts -pthread - yes
-- Caffe2: Cannot find gflags automatically. Using legacy find.
-- Found gflags: /usr/include  
-- Caffe2: Found gflags  (include: /usr/include, library: /usr/lib/x86_64-linux-gnu/libgflags.so)
-- Caffe2: Cannot find glog automatically. Using legacy find.
-- Found glog: /usr/include  
-- Caffe2: Found glog (include: /usr/include, library: /usr/lib/x86_64-linux-gnu/libglog.so)
-- Found CUDA: /usr/local/cuda (found suitable exact version "8.0") 
-- OpenCV found (/usr/local/share/OpenCV)
-- Found LMDB: /usr/include  
-- Found lmdb    (include: /usr/include, library: /usr/lib/x86_64-linux-gnu/liblmdb.so)
-- Found LevelDB: /usr/include  
-- Found LevelDB (include: /usr/include, library: /usr/lib/x86_64-linux-gnu/libleveldb.so)
-- Found Snappy: /usr/include  
-- Found Snappy  (include: /usr/include, library: /usr/lib/x86_64-linux-gnu/libsnappy.so)
-- Found Numa: /usr/include  
-- Found Numa  (include: /usr/include, library: /usr/lib/x86_64-linux-gnu/libnuma.so)
-- Found system Eigen at /usr/local/include/eigen3
-- Found PythonInterp: /usr/bin/python (found suitable version "2.7.12", minimum required is "2.7") 
-- Found PythonLibs: /usr/lib/x86_64-linux-gnu/libpython2.7.so (found suitable version "2.7.12", minimum required is "2.7") 
-- Found NumPy: /usr/lib/python2.7/dist-packages/numpy/core/include (found version "1.11.0") 
-- NumPy ver. 1.11.0 found (include: /usr/lib/python2.7/dist-packages/numpy/core/include)
-- Could NOT find pybind11 (missing:  pybind11_INCLUDE_DIR) 
-- Found MPI_C: /usr/lib/openmpi/lib/libmpi.so  
-- Found MPI_CXX: /usr/lib/openmpi/lib/libmpi_cxx.so;/usr/lib/openmpi/lib/libmpi.so  
-- MPI support found
-- MPI compile flags: 
-- MPI include path: /usr/lib/openmpi/include/openmpi/opal/mca/event/libevent2021/libevent/usr/lib/openmpi/include/openmpi/opal/mca/event/libevent2021/libevent/include/usr/lib/openmpi/include/usr/lib/openmpi/include/openmpi
-- MPI LINK flags path:  -Wl,-rpath  -Wl,/usr/lib/openmpi/lib  -Wl,--enable-new-dtags
-- MPI libraries: /usr/lib/openmpi/lib/libmpi_cxx.so/usr/lib/openmpi/lib/libmpi.so
CMake Warning at cmake/Dependencies.cmake:421 (message):
  OpenMPI found, but it is not built with CUDA support.
Call Stack (most recent call first):
  CMakeLists.txt:184 (include)


-- Found CUDA: /usr/local/cuda (found suitable version "8.0", minimum required is "7.0") 
-- Caffe2: CUDA detected: 8.0
-- Caffe2: CUDA nvcc is: /usr/local/cuda/bin/nvcc
-- Caffe2: CUDA toolkit directory: /usr/local/cuda
-- Caffe2: Header version is: 8.0
-- Found CUDNN: /usr/local/cuda/include  
-- Found cuDNN: v6.0.21  (include: /usr/local/cuda/include, library: /usr/local/cuda/lib64/libcudnn.so)
-- Autodetected CUDA architecture(s): 6.1 
-- Added CUDA NVCC flags for: -gencode;arch=compute_61,code=sm_61
-- Could NOT find CUB (missing:  CUB_INCLUDE_DIR) 
-- Could NOT find Gloo (missing:  Gloo_INCLUDE_DIR Gloo_LIBRARY) 
-- MPI include path: /usr/lib/openmpi/include/openmpi/opal/mca/event/libevent2021/libevent/usr/lib/openmpi/include/openmpi/opal/mca/event/libevent2021/libevent/include/usr/lib/openmpi/include/usr/lib/openmpi/include/openmpi
-- MPI libraries: /usr/lib/openmpi/lib/libmpi_cxx.so/usr/lib/openmpi/lib/libmpi.so
-- CUDA detected: 8.0
-- Found libcuda: /usr/local/cuda/lib64/stubs/libcuda.so
-- Found libnvrtc: /usr/local/cuda/lib64/libnvrtc.so
CMake Warning at cmake/Dependencies.cmake:648 (message):
  mobile opengl is only used in android or ios builds.
Call Stack (most recent call first):
  CMakeLists.txt:184 (include)


CMake Warning at cmake/Dependencies.cmake:724 (message):
  Metal is only used in ios builds.
Call Stack (most recent call first):
  CMakeLists.txt:184 (include)


-- GCC 5.4.0: Adding gcc and gcc_s libs to link line
-- NCCL operators skipped due to no CUDA support
-- Excluding ideep operators as we are not using ideep
-- Including image processing operators
-- Excluding video processing operators due to no opencv
-- Excluding mkl operators as we are not using mkl
-- Include Observer library
-- Using lib/python2.7/dist-packages as python relative installation path
-- Automatically generating missing __init__.py files.
CMake Warning at CMakeLists.txt:344 (message):
  Generated cmake files are only fully tested if one builds with system glog,
  gflags, and protobuf.  Other settings may generate files that are not well
  tested.


-- 
-- ******** Summary ********
-- General:
--   CMake version         : 3.5.1
--   CMake command         : /usr/bin/cmake
--   Git version           : v0.1.11-9124-g77484d9-dirty
--   System                : Linux
--   C++ compiler          : /usr/bin/c++
--   C++ compiler version  : 5.4.0
--   BLAS                  : Eigen
--   CXX flags             :  -fvisibility-inlines-hidden -DONNX_NAMESPACE=onnx_c2 -O2 -fPIC -Wno-narrowing -Wall -Wextra -Wno-missing-field-initializers -Wno-type-limits -Wno-unknown-pragmas -Wno-sign-compare -Wno-unused-parameter -Wno-unused-variable -Wno-unused-function -Wno-unused-result -Wno-error=deprecated-declarations
--   Build type            : Release
--   Compile definitions   : 
--   CMAKE_PREFIX_PATH     : 
--   CMAKE_INSTALL_PREFIX  : /usr/local
-- 
--   BUILD_CAFFE2          : ON
--   BUILD_ATEN            : OFF
--   BUILD_BINARY          : ON
--   BUILD_CUSTOM_PROTOBUF : ON
--     Link local protobuf : ON
--   BUILD_DOCS            : OFF
--   BUILD_PYTHON          : ON
--     Python version      : 2.7.12
--     Python includes     : /usr/include/python2.7
--   BUILD_SHARED_LIBS     : ON
--   BUILD_TEST            : OFF
--   USE_ASAN              : OFF
--   USE_ATEN              : OFF
--   USE_CUDA              : ON
--     CUDA static link    : OFF
--     USE_CUDNN           : ON
--     CUDA version        : 8.0
--     cuDNN version       : 6.0.21
--     CUDA root directory : /usr/local/cuda
--     CUDA library        : /usr/local/cuda/lib64/stubs/libcuda.so
--     cudart library      : /usr/local/cuda/lib64/libcudart_static.a;-pthread;dl;/usr/lib/x86_64-linux-gnu/librt.so
--     cublas library      : /usr/local/cuda/lib64/libcublas.so;/usr/local/cuda/lib64/libcublas_device.a
--     cufft library       : /usr/local/cuda/lib64/libcufft.so
--     curand library      : /usr/local/cuda/lib64/libcurand.so
--     cuDNN library       : /usr/local/cuda/lib64/libcudnn.so
--     nvrtc               : /usr/local/cuda/lib64/libnvrtc.so
--     CUDA include path   : /usr/local/cuda/include
--     NVCC executable     : /usr/local/cuda/bin/nvcc
--     CUDA host compiler  : /usr/bin/cc
--     USE_TENSORRT        : OFF
--   USE_ROCM              : OFF
--   USE_EIGEN_FOR_BLAS    : ON
--   USE_FFMPEG            : OFF
--   USE_GFLAGS            : ON
--   USE_GLOG              : ON
--   USE_GLOO              : ON
--     USE_GLOO_IBVERBS    : OFF
--   USE_LEVELDB           : ON
--     LevelDB version     : 1.18
--     Snappy version      : 1.1.3
--   USE_LITE_PROTO        : OFF
--   USE_LMDB              : ON
--     LMDB version        : 0.9.17
--   USE_METAL             : OFF
--   USE_MKL               : 
--   USE_MOBILE_OPENGL     : OFF
--   USE_MPI               : ON
--   USE_NCCL              : OFF
--   USE_NERVANA_GPU       : OFF
--   USE_NNPACK            : ON
--   USE_OBSERVERS         : ON
--   USE_OPENCL            : OFF
--   USE_OPENCV            : ON
--     OpenCV version      : 3.3.0
--   USE_OPENMP            : OFF
--   USE_PROF              : OFF
--   USE_REDIS             : OFF
--   USE_ROCKSDB           : OFF
--   USE_ZMQ               : OFF
--   Public Dependencies  : Threads::Threads;gflags;glog::glog
--   Private Dependencies : nnpack;cpuinfo;opencv_core;opencv_highgui;opencv_imgproc;opencv_imgcodecs;opencv_videoio;opencv_video;/usr/lib/x86_64-linux-gnu/liblmdb.so;/usr/local/lib;/usr/lib/x86_64-linux-gnu/libleveldb.so;/usr/lib/x86_64-linux-gnu/libsnappy.so;/usr/lib/x86_64-linux-gnu/libnuma.so;/usr/lib/openmpi/lib/libmpi_cxx.so;/usr/lib/openmpi/lib/libmpi.so;gloo;gcc_s;gcc;dl
-- Configuring done
WARNING: Target "caffe2" requests linking to directory "/usr/local/lib".  Targets may link only to libraries.  CMake is dropping the item.
WARNING: Target "caffe2" requests linking to directory "/usr/local/lib".  Targets may link only to libraries.  CMake is dropping the item.
-- Generating done
-- Build files have been written to: /home/yexin/pytorch/build

I looked up the two warnings above, found no real solution, ignored them, and ran sudo make install -j30 directly; Caffe2 installed successfully.

Next, test whether the install succeeded. Running the import check from earlier still reported Failure.

The cause was that the environment variables had not been set. Once they were added, it worked; the steps are as follows:

First, open the file where the environment variables are set, using gedit:

sudo gedit ~/.bashrc

Once it is open, add the following lines at the end of the file:
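(These are the same two exports listed again in the postscript at the end of this section; the pytorch paths are specific to my machine.)

    export PYTHONPATH=/usr/local:$PYTHONPATH:/home/yexin/pytorch/build:/home/yexin/pytorch/build/caffe2
    export LD_LIBRARY_PATH=/usr/local/lib:$LD_LIBRARY_PATH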

Then save and close the file, and run this in the terminal:

source ~/.bashrc

The environment variables then take effect, and this time the test succeeds.

Postscript: for the record, the environment variables of my final, successful install are below:

export PYTHONPATH=/usr/local:$PYTHONPATH:/home/yexin/pytorch/build:/home/yexin/pytorch/build/caffe2
export LD_LIBRARY_PATH=/usr/local/lib:$LD_LIBRARY_PATH

However, testing this failed, saying the hypothesis module could not be found:

Tower-7910:~/pytorch/build$ python caffe2/python/operator_test/relu_op_test.py

Problem 4: the hypothesis module and pip

Running the test script fails, saying the hypothesis module cannot be found, so I went to pip install it. For some reason I then, rather foolishly, upgraded pip as well, and found the new pip behaves differently from before:

#old way:
pip install hypothesis
#new way:
pip install --user hypothesis

Without --user, the install fails with a permission-denied error.

If that still does not work: in my case pip had been installed before, and after the upgrade the new pip seemed to conflict with it somehow, so I removed the original first:

sudo apt-get remove python-pip

Then typing pip at the prompt showed a pip was still present... apparently the upgrade had installed a second copy, and the two versions were conflicting.
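To see how many pip copies are actually on the PATH after all this (a quick sanity check):

    # List every pip found on the PATH, in lookup order, and show which one wins
    which -a pip
    pip --version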

Now the test script runs.

It warns that Caffe2 was not compiled with AVX, so it may not be able to use the CPU at full speed.

Not sure why it keeps reporting "cudnn not available"; a search suggests there is no official fix for this at the moment... exhausting.
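A quick way to see whether this build of Caffe2 can see the GPU at all is the device-count check from the same getting-started page (assuming the PYTHONPATH exports above are active):

    # Should print the number of CUDA devices Caffe2 detects (1 on this machine)
    python -c "from caffe2.python import workspace; print(workspace.NumCudaDevices())"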

When the run finished, the output looked like this:

4. Installing Detectron

Finally here... several projects have been overlapping lately and I kept getting pulled away, so two weeks have passed between step 1 and this point. So many tears along the way...

(1) Installing the COCO API

git clone https://github.com/cocodataset/cocoapi.git
cd cocoapi/PythonAPI
sudo make install

After installation it looks like this:

(2) Installing Detectron

git clone https://github.com/facebookresearch/detectron $DETECTRON
cd detectron
pip install --user -r requirements.txt
sudo make

Test whether the installation succeeded:

python2 detectron/tests/test_spatial_narrow_as_op.py

The output is as follows:

(3) Running a Mask R-CNN example on the provided demo images

The script for testing Mask R-CNN is /home/yexin/detectron/tools/infer_simple.py. Two model (weights) files need to be downloaded in advance, apparently the R-CNN/Mask model and the backbone; since my connection is painfully slow, downloading them ahead of time is more convenient.

Download: model_final.pkl

Download: R-101.pkl

After downloading, for the first model, create a model folder under the detectron directory, create a MaskRCNN folder inside that, and put the file there.

The second model is not used for now.
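In command form, the layout described above is roughly the following (assuming the file was downloaded to ~/Downloads; the model/MaskRCNN path matches the --wts argument in the command below):

    cd ~/detectron
    mkdir -p model/MaskRCNN
    mv ~/Downloads/model_final.pkl model/MaskRCNN/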

yexin@yexin-Precision-Tower-7910:~/detectron$ python2 tools/infer_simple.py \
> --cfg configs/12_2017_baselines/e2e_mask_rcnn_R-101-FPN_2x.yaml \
> --wts model/MaskRCNN/model_final.pkl \
> --output-dir tmp/detectron-visualizations \
> --image-ext jpg \
> --always-out \
> --im_or_folder demo \
> --output-ext pdf

What do these arguments mean? infer_simple.py has the full descriptions; briefly:

--cfg config file: the network configuration file; parameters such as the number of iterations are set here;

--wts weights: the trained model, i.e. the network weights;

--output-dir output path: where the result images from inference are written; here I created a tmp/detectron-visualizations folder under the detectron directory;

--image-ext extension: the file extension of the input images, jpg by default;

--always-out: when set, an output image is written even if nothing is detected in it; that output should look the same as the input;

--im_or_folder image path: the path of the image to process, or of a folder; if a folder is given, every image in it with the extension set above (jpg) is processed; here it is set to process all jpg images in the demo folder;

--output-ext extension: the file extension of the result images, pdf by default.

The results are as follows:

For example, the recognition result for one of the images:

(4) A quick test of the input image size limit

With first-generation Caffe, a capacity check is effectively performed up front based on the image size: if the network is deep, the blob memory exceeds the GPU limit and the run stops with an error at that check. I was worried Caffe2 might behave the same way, so I downloaded from the web a 5K image of a ship seen head-on (5120x2880) and a 4K sports-car image (3840x2160), downsampled the 5K image to 3K, 2K, 1.8K, 1.6K, 1.4K, 1.2K, and 1K, and ran the tests using the same model invocation as above.
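One way to produce the downsampled copies is a plain resize with ImageMagick (an assumption on my part; any image tool works, and the file names below are placeholders):

    # Resize to 3000 px wide, keeping the aspect ratio; repeat with other widths for the 2K, 1.8K, ... versions
    convert ship_5120x2880.jpg -resize 3000x ship_3k.jpg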

Speed was still decent (mainly because there are few objects), and it can at least handle images this large, which is encouraging. I plan to try even larger images next, such as maps.

The first image takes a bit longer:

PS: the ship is only detected at 1.8K and above...

(5) Applying Mask R-CNN to my own data

References:

A simple walkthrough of all the steps: https://blog.csdn.net/u014525760/article/details/79931485

A very thorough blogger: https://blog.csdn.net/Xiongchao99/article/details/79106588

https://zhuanlan.zhihu.com/p/34036460

http://matti.frind.de/?p=2089

① Making labels

② Training and testing

③ Recognizing new images


Reposted from blog.csdn.net/foreverhehe716/article/details/81086644