Research on Lane Line Image Segmentation in PaddleSeg 2.0 Dynamic Graph Mode

Table of contents

Lane line task introduction

Lane line detection related data sets

BDD100K: A Large-scale Diverse Driving Video Database

ApolloScape

Basic preprocessing techniques for lane line detection

Image cropping

Image sharpening

Canny edge extraction

References

Lane segmentation using PaddleSeg

Unzip the dataset

Prepare dataset

Analyze the dataset

Calculate the proportion of pixels in each category

Class Imbalance Handling

Weighted softmax loss

Lovasz loss

Train

Evaluate

Predict

Export a static graph model

Python prediction deployment

Extra article: Handling data analysis and training tasks with script tasks

Python OS commands and common terminal command lines

Basic Operations of Script Tasks


Lane line task introduction

Lane line detection is a basic task in autonomous driving and map navigation scenarios. For example, the classic dash-cam perspective (or similar):

In [ ]
# Video source: https://github.com/udacity/CarND-Advanced-Lane-Lines/blob/master/project_video.mp4
import IPython
IPython.display.Video('project_video.mp4')
<IPython.core.display.Video object>

Segmentation-based lane line detection is also sometimes considered too slow, and faster alternatives have been proposed, e.g. Ultra-Fast-Lane-Detection.

Lane line detection related data sets

BDD100K: A Large-scale Diverse Driving Video Database

In May 2018, the Berkeley AI Research lab (BAIR) released a large-scale, diverse public driving dataset along with an image annotation system. The BDD100K dataset contains 100,000 high-definition videos, each about 40 seconds at 720p/30 fps. A key frame is sampled at the 10th second of each video, yielding 100,000 images (1280×720) that are then annotated. Lane line annotation is one of the labeling tasks.

ApolloScape

This large-scale dataset consists of video sequences recording self-driving scenes on different city streets, including more than 110,000 frames with high-quality pixel-level annotations.

Basic preprocessing techniques for lane line detection

Image cropping

As mentioned in the reference project "Building an unmanned vehicle lane line detection challenge solution from scratch":

Through careful observation, we found that these data share a common feature: roughly the upper third of each picture is sky, which contains no lane lines. Knowing this, we can crop it off, immediately saving a third of the video memory. Isn't that cool? Here I crop at a height of 690 pixels.

This phenomenon is common across lane line detection datasets, so cropping off the sky before training is a standard trick to save both memory and video memory. The usual practice is to directly crop off the top half or top third of the picture.

In [26]
from PIL import Image, ImageFilter
import cv2
import numpy as np
def crop_data(input_path, output_path):
    img = Image.open(input_path)
    print(img.size)
    # Crop off the top half of the image; box is (left, upper, right, lower)
    cropped = img.crop((0, img.height // 2, img.width, img.height))
    print(cropped.size)
    cropped.save(output_path)
In [28]
crop_data('./crop_input.png','./crop_output.png')

Image sharpening

In [ ]
def sharp_data(input_path, output_path):
    img = Image.open(input_path)
    print(img.size)
    # Apply the sharpen filter twice
    img = img.filter(ImageFilter.SHARPEN)
    img = img.filter(ImageFilter.SHARPEN)
    img.save(output_path)
In [ ]
sharp_data('./crop_input.png','./sharp_output.png')
(431, 241)

 

Canny edge extraction

In [ ]
# Note: use IMREAD_GRAYSCALE here; cv2.COLOR_BGR2GRAY is a cvtColor code, not an imread flag
img = cv2.imread('crop_input.png', cv2.IMREAD_GRAYSCALE)
In [ ]
low_threshold = 40
high_threshold = 150
canny_image = cv2.Canny(img, low_threshold, high_threshold)
In [ ]
cv2.imwrite('canny.jpg', canny_image)
True
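Canny works by smoothing the image, computing gradients, and then applying double-threshold hysteresis; the `low_threshold` and `high_threshold` arguments above control that last step. A toy numpy sketch of the double-threshold classification only (an illustration of the idea, not OpenCV's implementation):

```python
import numpy as np

def double_threshold(grad_mag, low, high):
    """Classify gradient magnitudes: strong edges (>= high),
    weak edges (in [low, high)), non-edges (< low)."""
    strong = grad_mag >= high
    weak = (grad_mag >= low) & ~strong
    return strong, weak

# Toy gradient-magnitude map with the same thresholds as above
g = np.array([[10, 50, 200],
              [30, 160, 45],
              [0, 90, 155]])
strong, weak = double_threshold(g, low=40, high=150)
print(strong.sum(), weak.sum())  # 3 strong pixels, 3 weak pixels
```

In the real algorithm, weak pixels are kept only if connected to a strong pixel (hysteresis), which is why the two thresholds matter more than any single cutoff.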

Lane segmentation using PaddleSeg

Unzip the dataset

In [6]
!unzip data/data68698/智能车数据集.zip
In [ ]
# !git clone https://gitee.com/paddlepaddle/PaddleSeg.git
Cloning into 'PaddleSeg'...
remote: Enumerating objects: 10912, done.
remote: Counting objects: 100% (10912/10912), done.
remote: Compressing objects: 100% (5400/5400), done.
remote: Total 10912 (delta 7379), reused 8171 (delta 5353), pack-reused 0
Receiving objects: 100% (10912/10912), 156.94 MiB | 21.09 MiB/s, done.
Resolving deltas: 100% (7379/7379), done.
Checking connectivity... done.

Prepare dataset

PaddleSeg currently supports loading datasets such as Cityscapes, ADE20K, and Pascal VOC. When loading a dataset, if the corresponding data does not exist locally, the download is triggered automatically (except for Cityscapes). Here we can directly use the list-generation script provided by the competition.

In [7]
%run make_list.py
# !python make_list.py
[('image_4000/3199.png', 'mask_4000/3199.png'), ('image_4000/3590.png', 'mask_4000/3590.png'), ('image_4000/3685.png', 'mask_4000/3685.png')]
4000
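The exact contents of `make_list.py` are not shown here; below is a minimal sketch of what such a list-generation script typically does, assuming the `image_4000`/`mask_4000` layout seen in the printed output (the 8:1 train/val split and file names are assumptions for illustration):

```python
import os
import random

def make_lists(image_dir, mask_dir, out_dir, val_ratio=0.125):
    """Pair each image with its mask, shuffle, split into train/val,
    and write 'image mask' lines (one pair per line) to list files."""
    names = sorted(os.listdir(image_dir))
    pairs = [(os.path.join(image_dir, n), os.path.join(mask_dir, n))
             for n in names]
    random.seed(0)  # fixed seed so the split is reproducible
    random.shuffle(pairs)
    n_val = int(len(pairs) * val_ratio)
    splits = {'val_list.txt': pairs[:n_val], 'train_list.txt': pairs[n_val:]}
    for fname, split in splits.items():
        with open(os.path.join(out_dir, fname), 'w') as f:
            f.writelines('{} {}\n'.format(img, msk) for img, msk in split)
    return len(pairs)
```

PaddleSeg's `Dataset` type reads these lists relative to `dataset_root`, so in practice the paths written should be relative to that root.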
In [1]
%set_env CUDA_VISIBLE_DEVICES=0
env: CUDA_VISIBLE_DEVICES=0

Analyze the dataset

In the [Magic Revamp] National College Student Smart Car Competition - Lane Line Detection Baseline project, @BIT达达鸡 analyzed the dataset and tried to balance it. Let's first look directly at the per-class statistics of the dataset.

-[INFO] Label 1: 0.0170
-[INFO] Label 2: 0.4482
-[INFO] Label 3: 0.0843
-[INFO] Label 4: 0.0767
-[INFO] Label 5: 0.0334
-[INFO] Label 6: 0.2513
-[INFO] Label 7: 0.0070
-[INFO] Label 8: 0.0025
-[INFO] Label 9: 0.0158
-[INFO] Label 10: 0.0152
-[INFO] Label 11: 0.0292
-[INFO] Label 12: 0.0087
-[INFO] Label 13: 0.0061
-[INFO] Label 14: 0.0046
-[INFO] Label 15: 0.0000

You can also refer to @yhl_leo's article on counting the category distribution of an image segmentation training set. Since the computation takes a long time, it is written as a script task here, which can serve as a template for such statistics: "Image segmentation training set category distribution statistics script: Smart Car Competition - Lane Line Detection".

In [ ]
import cv2, os
import numpy as np

# number of classes
CLASSES_NUM = 15

# collect image filenames under dir
def findImages(dir, topdown=True):
    im_list = []
    if not os.path.exists(dir):
        raise FileNotFoundError("Path {} does not exist!".format(dir))
    for root, dirs, files in os.walk(dir, topdown):
        for fl in files:
            im_list.append(fl)
    return im_list

# number of images in which each class appears
images_count = [0]*CLASSES_NUM
# number of pixels belonging to each class
class_pixels_count = [0]*CLASSES_NUM
# total pixels of the images in which each class appears
image_pixels_count = [0]*CLASSES_NUM

image_folder = './mask_4000'
im_list = findImages(image_folder) 

for im in im_list:
    print(im)
    cv_img = cv2.imread(os.path.join(image_folder, im), cv2.IMREAD_UNCHANGED)
    size_img = cv_img.shape
    colors = set([])
    for i in range(size_img[0]):
        for j in range(size_img[1]):
            p_value = cv_img.item(i,j)
            if not p_value < CLASSES_NUM: # check
                print(p_value)
            else:
                class_pixels_count[p_value] = class_pixels_count[p_value] + 1
                colors.add(p_value)
    im_size = size_img[0]*size_img[1]
    for n in range(CLASSES_NUM):
        if n in colors:
            images_count[n] = images_count[n] + 1
            image_pixels_count[n] = image_pixels_count[n] + im_size

print(images_count)
print(class_pixels_count)
print(image_pixels_count)
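The nested per-pixel loop above runs in pure Python and is very slow over 4000 masks. The same per-class counts can be computed in a vectorized way with `np.bincount`; a minimal sketch on a toy mask:

```python
import numpy as np

CLASSES_NUM = 15

def count_mask(mask, num_classes=CLASSES_NUM):
    """Per-class pixel counts for one label mask, plus the set of
    classes present, computed via a single bincount call."""
    counts = np.bincount(mask.ravel(), minlength=num_classes)
    present = np.flatnonzero(counts[:num_classes] > 0)
    return counts[:num_classes], present

# Toy 2x3 label mask
mask = np.array([[0, 0, 2],
                 [2, 2, 5]], dtype=np.uint8)
counts, present = count_mask(mask)
print(counts[:6])   # [2 0 3 0 0 1]
print(present)      # [0 2 5]
```

Accumulating `counts` and `present` per image reproduces `class_pixels_count`, `images_count`, and `image_pixels_count` from the script, at a fraction of the runtime.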

Calculate the proportion of pixels in each category

images_count:       [4000, 457, 3437, 803, 481, 850, 2997, 118, 32, 182, 213, 205, 60, 24, 33]
class_pixels_count: [9284543967, 2595404, 68411608, 12860131, 11705492, 5098954, 38357780, 1066241, 383440, 2419091, 2312722, 4459757, 1332924, 927368, 709121]
image_pixels_count: [9437184000, 1078198272, 8108900352, 1894514688, 1134821376, 2005401600, 7070810112, 278396928, 75497472, 429391872, 502530048, 483655680, 141557760, 56623104, 77856768]
In [87]
t = [9284543967, 2595404, 68411608, 12860131, 11705492, 5098954, 38357780, 1066241, 383440, 2419091, 2312722, 4459757, 1332924, 927368, 709121]
In [88]
a = np.array(t).sum()
In [89]
for i in range(len(t)):
    t[i] = round(t[i]/a, 4)
In [90]
print(t)
[0.9838, 0.0003, 0.0072, 0.0014, 0.0012, 0.0005, 0.0041, 0.0001, 0.0, 0.0003, 0.0002, 0.0005, 0.0001, 0.0001, 0.0001]

Class Imbalance Handling

In image segmentation tasks, there are often situations where the distribution of categories is uneven, such as: defect detection of industrial products, road extraction, and lesion area extraction.

To solve this problem, PaddleSeg mainly provides two solutions: weighted softmax loss and Lovasz loss.

Weighted softmax loss

Weighted softmax loss is a softmax loss with per-class weights.

Enable it by setting the cfg.SOLVER.CROSS_ENTROPY_WEIGHT parameter.
The default is None. If set to 'dynamic', class weights are adjusted dynamically according to the class frequencies in each batch. You can also set static weights as a list; for example, with 3 classes the per-class weights could be [0.1, 2.0, 0.9].

SOLVER:
    LR: 0.005
    LR_POLICY: "poly"
    OPTIMIZER: "sgd"
    NUM_EPOCHS: 40
    CROSS_ENTROPY_WEIGHT: [0.1, 2, 0.5, 2, 2, 2, 1, 2, 2, 2, 2, 2, 2, 2, 2] # per-class loss weight
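The static weight list above looks hand-tuned. One common heuristic (an assumption here, not PaddleSeg's built-in behavior) derives static weights from the pixel proportions computed earlier, e.g. inverse log frequency:

```python
import numpy as np

# Per-class pixel proportions from the earlier statistics
props = np.array([0.9838, 0.0003, 0.0072, 0.0014, 0.0012, 0.0005,
                  0.0041, 0.0001, 0.0, 0.0003, 0.0002, 0.0005,
                  0.0001, 0.0001, 0.0001])

# Inverse-log-frequency weighting (a hypothetical heuristic): rare
# classes get larger weights; the 1.02 constant bounds the maximum.
weights = 1.0 / np.log(1.02 + props)
weights = weights / weights.min()  # normalize so the background weight is 1
print(np.round(weights, 1))
```

The resulting list could be pasted into CROSS_ENTROPY_WEIGHT in place of the hand-tuned values; in practice the constant (1.02 here) is itself a tuning knob.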

Lovasz loss

Lovasz loss optimizes the mean IoU of a neural network based on the convex Lovasz extension of submodular losses. Depending on the number of segmentation classes, it comes in two variants: lovasz hinge loss for binary problems and lovasz softmax loss for multi-class problems. This work was published at CVPR 2018; see the references for the underlying theory.

Note that directly training with Lovasz loss alone does not always work well. PaddleSeg recommends two training schemes instead:

  • (1) A weighted combination with cross entropy loss or bce loss (binary cross-entropy loss).
  • (2) Train with cross entropy loss or bce loss first, then finetune with lovasz softmax loss or lovasz hinge loss. The weight ratio between the different losses is set through the coef parameter, so training can be tuned flexibly.
loss:
  types:
    - type: MixedLoss
      losses:
        - type: CrossEntropyLoss
        - type: LovaszSoftmaxLoss
      coef: [0.8, 0.2]
    - type: MixedLoss
      losses:
        - type: CrossEntropyLoss
        - type: LovaszSoftmaxLoss
      coef: [0.8, 0.2]
  coef: [1, 0.4]
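In the config above, each inner coef weights the losses inside one MixedLoss, and the outer coef weights the two model outputs (OCRNet produces a main and an auxiliary prediction). Schematically, with placeholder loss values:

```python
def mixed_loss(ce, lovasz, coef=(0.8, 0.2)):
    """Weighted sum inside one MixedLoss entry."""
    return coef[0] * ce + coef[1] * lovasz

# Placeholder loss values for OCRNet's main and auxiliary outputs
main_loss = mixed_loss(ce=0.5, lovasz=0.3)  # 0.8*0.5 + 0.2*0.3 = 0.46
aux_loss = mixed_loss(ce=0.6, lovasz=0.4)   # 0.8*0.6 + 0.2*0.4 = 0.56
total = 1.0 * main_loss + 0.4 * aux_loss    # outer coef: [1, 0.4]
print(round(total, 4))  # 0.684
```

This makes the cross entropy term dominant (scheme 1 above) while the Lovasz term still nudges the optimization toward better mIoU.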

SOLVER:
    LR: 0.005
    LR_POLICY: "poly"
    OPTIMIZER: "sgd"
    NUM_EPOCHS: 40
    CROSS_ENTROPY_WEIGHT: dynamic
In [8]
%cd PaddleSeg
/home/aistudio/PaddleSeg

Train

In [29]
!python train.py \
       --config configs/ocrnet/ocrnet_hrnetw18_cityscapes_1024x512_160k_lovasz_softmax.yml \
       --do_eval \
       --use_vdl \
       --save_interval 1000 \
       --save_dir output

 

In [18]
!python train.py \
       --config configs/ocrnet/ocrnet_hrnetw18_cityscapes_1024x512_160k_lovasz_softmax.yml \
       --resume_model output/iter_6000 \
       --do_eval \
       --use_vdl \
       --save_interval 1000 \
       --save_dir output
/opt/conda/envs/python35-paddle120-env/lib/python3.7/site-packages/setuptools/depends.py:2: DeprecationWarning: the imp module is deprecated in favour of importlib; see the module's documentation for alternative uses
  import imp
2021-04-08 10:13:26 [INFO]	
------------Environment Information-------------
platform: Linux-4.4.0-150-generic-x86_64-with-debian-stretch-sid
Python: 3.7.4 (default, Aug 13 2019, 20:35:49) [GCC 7.3.0]
Paddle compiled with cuda: True
NVCC: Cuda compilation tools, release 10.1, V10.1.243
cudnn: 7.6
GPUs used: 1
CUDA_VISIBLE_DEVICES: 0
GPU: ['GPU 0: Tesla V100-SXM2-32GB']
GCC: gcc (Ubuntu 7.5.0-3ubuntu1~16.04) 7.5.0
PaddlePaddle: 2.0.1
OpenCV: 4.1.1
------------------------------------------------
2021-04-08 10:13:26 [INFO]	
---------------Config Information---------------
SOLVER:
  CROSS_ENTROPY_WEIGHT: dynamic
  LR: 0.005
  LR_POLICY: poly
  NUM_EPOCHS: 40
  OPTIMIZER: sgd
batch_size: 4
iters: 35000
learning_rate:
  decay:
    end_lr: 0.0
    power: 0.9
    type: poly
  value: 0.0025
loss:
  coef:
  - 1
  - 0.4
  types:
  - coef:
    - 0.8
    - 0.2
    losses:
    - type: CrossEntropyLoss
    - type: LovaszSoftmaxLoss
    type: MixedLoss
  - coef:
    - 0.8
    - 0.2
    losses:
    - type: CrossEntropyLoss
    - type: LovaszSoftmaxLoss
    type: MixedLoss
model:
  backbone:
    pretrained: https://bj.bcebos.com/paddleseg/dygraph/hrnet_w18_ssld.tar.gz
    type: HRNet_W18
  backbone_indices:
  - 0
  type: OCRNet
optimizer:
  momentum: 0.9
  type: sgd
  weight_decay: 4.0e-05
train_dataset:
  dataset_root: /home/aistudio/
  mode: train
  num_classes: 15
  train_path: /home/aistudio/train_list.txt
  transforms:
  - max_scale_factor: 2.0
    min_scale_factor: 0.5
    scale_step_size: 0.25
    type: ResizeStepScaling
  - max_rotation: 30
    type: RandomRotation
  - type: RandomHorizontalFlip
  - type: RandomVerticalFlip
  - crop_size:
    - 1024
    - 512
    type: RandomPaddingCrop
  - type: RandomBlur
  - brightness_range: 0.4
    contrast_range: 0.4
    saturation_range: 0.4
    type: RandomDistort
  - type: Normalize
  type: Dataset
val_dataset:
  dataset_root: /home/aistudio/
  mode: val
  num_classes: 15
  transforms:
  - type: Normalize
  type: Dataset
  val_path: /home/aistudio/val_list.txt
------------------------------------------------
W0408 10:13:26.278049 10398 device_context.cc:362] Please NOTE: device: 0, GPU Compute Capability: 7.0, Driver API Version: 10.1, Runtime API Version: 10.1
W0408 10:13:26.278107 10398 device_context.cc:372] device: 0, cuDNN Version: 7.6.
2021-04-08 10:13:31 [INFO]	Loading pretrained model from https://bj.bcebos.com/paddleseg/dygraph/hrnet_w18_ssld.tar.gz
2021-04-08 10:13:31,306 - INFO - Lock 140184385978832 acquired on /home/aistudio/.paddleseg/tmp/hrnet_w18_ssld
2021-04-08 10:13:31,307 - INFO - Lock 140184385978832 released on /home/aistudio/.paddleseg/tmp/hrnet_w18_ssld
2021-04-08 10:13:32 [INFO]	There are 1525/1525 variables loaded into HRNet.
2021-04-08 10:13:32 [INFO]	Resume model from output/iter_6000
2021-04-08 10:13:45 [INFO]	[TRAIN] epoch=7, iter=6010/35000, loss=0.2093, lr=0.002110, batch_cost=1.1784, reader_cost=0.10351, ips=3.3943 samples/sec | ETA 09:29:23
2021-04-08 10:13:55 [INFO]	[TRAIN] epoch=7, iter=6020/35000, loss=0.2493, lr=0.002110, batch_cost=0.9857, reader_cost=0.00008, ips=4.0580 samples/sec | ETA 07:56:05
2021-04-08 10:14:05 [INFO]	[TRAIN] epoch=7, iter=6030/35000, loss=0.2214, lr=0.002109, batch_cost=1.0455, reader_cost=0.00009, ips=3.8259 samples/sec | ETA 08:24:48
2021-04-08 10:14:16 [INFO]	[TRAIN] epoch=7, iter=6040/35000, loss=0.2016, lr=0.002108, batch_cost=1.0664, reader_cost=0.00008, ips=3.7509 samples/sec | ETA 08:34:43
2021-04-08 10:14:27 [INFO]	[TRAIN] epoch=7, iter=6050/35000, loss=0.2157, lr=0.002108, batch_cost=1.0611, reader_cost=0.00009, ips=3.7695 samples/sec | ETA 08:32:00
2021-04-08 10:14:38 [INFO]	[TRAIN] epoch=7, iter=6060/35000, loss=0.1924, lr=0.002107, batch_cost=1.1031, reader_cost=0.00012, ips=3.6261 samples/sec | ETA 08:52:03
2021-04-08 10:14:48 [INFO]	[TRAIN] epoch=7, iter=6070/35000, loss=0.2162, lr=0.002106, batch_cost=1.0151, reader_cost=0.00010, ips=3.9404 samples/sec | ETA 08:09:27
2021-04-08 10:14:58 [INFO]	[TRAIN] epoch=7, iter=6080/35000, loss=0.1959, lr=0.002106, batch_cost=1.0447, reader_cost=0.00010, ips=3.8288 samples/sec | ETA 08:23:32
2021-04-08 10:15:09 [INFO]	[TRAIN] epoch=7, iter=6090/35000, loss=0.2184, lr=0.002105, batch_cost=1.0520, reader_cost=0.00010, ips=3.8022 samples/sec | ETA 08:26:53
2021-04-08 10:15:20 [INFO]	[TRAIN] epoch=7, iter=6100/35000, loss=0.2238, lr=0.002104, batch_cost=1.1000, reader_cost=0.00008, ips=3.6363 samples/sec | ETA 08:49:50
2021-04-08 10:15:30 [INFO]	[TRAIN] epoch=7, iter=6110/35000, loss=0.2002, lr=0.002104, batch_cost=0.9978, reader_cost=0.00008, ips=4.0089 samples/sec | ETA 08:00:25
2021-04-08 10:15:41 [INFO]	[TRAIN] epoch=7, iter=6120/35000, loss=0.2047, lr=0.002103, batch_cost=1.0724, reader_cost=0.00008, ips=3.7300 samples/sec | ETA 08:36:10
2021-04-08 10:15:51 [INFO]	[TRAIN] epoch=8, iter=6130/35000, loss=0.2244, lr=0.002102, batch_cost=1.0544, reader_cost=0.00008, ips=3.7935 samples/sec | ETA 08:27:21
2021-04-08 10:16:02 [INFO]	[TRAIN] epoch=8, iter=6140/35000, loss=0.2038, lr=0.002102, batch_cost=1.0655, reader_cost=0.00011, ips=3.7541 samples/sec | ETA 08:32:30
2021-04-08 10:16:13 [INFO]	[TRAIN] epoch=8, iter=6150/35000, loss=0.2257, lr=0.002101, batch_cost=1.0989, reader_cost=0.00010, ips=3.6400 samples/sec | ETA 08:48:23
2021-04-08 10:16:23 [INFO]	[TRAIN] epoch=8, iter=6160/35000, loss=0.2056, lr=0.002100, batch_cost=0.9964, reader_cost=0.00008, ips=4.0143 samples/sec | ETA 07:58:57
2021-04-08 10:16:33 [INFO]	[TRAIN] epoch=8, iter=6170/35000, loss=0.2248, lr=0.002100, batch_cost=1.0123, reader_cost=0.00010, ips=3.9515 samples/sec | ETA 08:06:23
2021-04-08 10:16:43 [INFO]	[TRAIN] epoch=8, iter=6180/35000, loss=0.2562, lr=0.002099, batch_cost=1.0206, reader_cost=0.00008, ips=3.9192 samples/sec | ETA 08:10:14
2021-04-08 10:16:53 [INFO]	[TRAIN] epoch=8, iter=6190/35000, loss=0.1847, lr=0.002098, batch_cost=1.0048, reader_cost=0.00008, ips=3.9809 samples/sec | ETA 08:02:28
2021-04-08 10:17:03 [INFO]	[TRAIN] epoch=8, iter=6200/35000, loss=0.2059, lr=0.002098, batch_cost=1.0068, reader_cost=0.00008, ips=3.9731 samples/sec | ETA 08:03:14
2021-04-08 10:17:13 [INFO]	[TRAIN] epoch=8, iter=6210/35000, loss=0.1989, lr=0.002097, batch_cost=1.0177, reader_cost=0.00008, ips=3.9304 samples/sec | ETA 08:08:19
2021-04-08 10:17:24 [INFO]	[TRAIN] epoch=8, iter=6220/35000, loss=0.2231, lr=0.002096, batch_cost=1.0990, reader_cost=0.00014, ips=3.6396 samples/sec | ETA 08:47:09
2021-04-08 10:17:35 [INFO]	[TRAIN] epoch=8, iter=6230/35000, loss=0.2176, lr=0.002096, batch_cost=1.1169, reader_cost=0.00010, ips=3.5813 samples/sec | ETA 08:55:33
2021-04-08 10:17:46 [INFO]	[TRAIN] epoch=8, iter=6240/35000, loss=0.2173, lr=0.002095, batch_cost=1.0865, reader_cost=0.00008, ips=3.6816 samples/sec | ETA 08:40:47
2021-04-08 10:17:57 [INFO]	[TRAIN] epoch=8, iter=6250/35000, loss=0.1872, lr=0.002094, batch_cost=1.0429, reader_cost=0.00008, ips=3.8353 samples/sec | ETA 08:19:44
2021-04-08 10:18:08 [INFO]	[TRAIN] epoch=8, iter=6260/35000, loss=0.2105, lr=0.002094, batch_cost=1.1581, reader_cost=0.00010, ips=3.4539 samples/sec | ETA 09:14:44
2021-04-08 10:18:18 [INFO]	[TRAIN] epoch=8, iter=6270/35000, loss=0.1888, lr=0.002093, batch_cost=0.9761, reader_cost=0.00008, ips=4.0978 samples/sec | ETA 07:47:24
2021-04-08 10:18:28 [INFO]	[TRAIN] epoch=8, iter=6280/35000, loss=0.2201, lr=0.002092, batch_cost=0.9710, reader_cost=0.00008, ips=4.1197 samples/sec | ETA 07:44:45
2021-04-08 10:18:39 [INFO]	[TRAIN] epoch=8, iter=6290/35000, loss=0.2116, lr=0.002092, batch_cost=1.0923, reader_cost=0.00008, ips=3.6618 samples/sec | ETA 08:42:41
2021-04-08 10:18:49 [INFO]	[TRAIN] epoch=8, iter=6300/35000, loss=0.2101, lr=0.002091, batch_cost=1.0442, reader_cost=0.00008, ips=3.8306 samples/sec | ETA 08:19:29
2021-04-08 10:19:00 [INFO]	[TRAIN] epoch=8, iter=6310/35000, loss=0.2121, lr=0.002090, batch_cost=1.0931, reader_cost=0.00008, ips=3.6594 samples/sec | ETA 08:42:40
2021-04-08 10:19:10 [INFO]	[TRAIN] epoch=8, iter=6320/35000, loss=0.2183, lr=0.002090, batch_cost=1.0179, reader_cost=0.00008, ips=3.9298 samples/sec | ETA 08:06:32
2021-04-08 10:19:21 [INFO]	[TRAIN] epoch=8, iter=6330/35000, loss=0.2113, lr=0.002089, batch_cost=1.0378, reader_cost=0.00008, ips=3.8544 samples/sec | ETA 08:15:52
2021-04-08 10:19:31 [INFO]	[TRAIN] epoch=8, iter=6340/35000, loss=0.2115, lr=0.002089, batch_cost=1.0446, reader_cost=0.00011, ips=3.8293 samples/sec | ETA 08:18:57
2021-04-08 10:19:42 [INFO]	[TRAIN] epoch=8, iter=6350/35000, loss=0.2100, lr=0.002088, batch_cost=1.1134, reader_cost=0.00012, ips=3.5925 samples/sec | ETA 08:51:39
2021-04-08 10:19:53 [INFO]	[TRAIN] epoch=8, iter=6360/35000, loss=0.1888, lr=0.002087, batch_cost=1.0259, reader_cost=0.00008, ips=3.8989 samples/sec | ETA 08:09:42
2021-04-08 10:20:03 [INFO]	[TRAIN] epoch=8, iter=6370/35000, loss=0.2396, lr=0.002087, batch_cost=1.0202, reader_cost=0.00008, ips=3.9208 samples/sec | ETA 08:06:48
2021-04-08 10:20:13 [INFO]	[TRAIN] epoch=8, iter=6380/35000, loss=0.2279, lr=0.002086, batch_cost=1.0538, reader_cost=0.00009, ips=3.7957 samples/sec | ETA 08:22:40
2021-04-08 10:20:24 [INFO]	[TRAIN] epoch=8, iter=6390/35000, loss=0.2020, lr=0.002085, batch_cost=1.0387, reader_cost=0.00009, ips=3.8509 samples/sec | ETA 08:15:17
2021-04-08 10:20:35 [INFO]	[TRAIN] epoch=8, iter=6400/35000, loss=0.1952, lr=0.002085, batch_cost=1.1040, reader_cost=0.00011, ips=3.6231 samples/sec | ETA 08:46:14
2021-04-08 10:20:45 [INFO]	[TRAIN] epoch=8, iter=6410/35000, loss=0.1740, lr=0.002084, batch_cost=1.0441, reader_cost=0.00014, ips=3.8310 samples/sec | ETA 08:17:31
2021-04-08 10:20:55 [INFO]	[TRAIN] epoch=8, iter=6420/35000, loss=0.2108, lr=0.002083, batch_cost=1.0202, reader_cost=0.00010, ips=3.9208 samples/sec | ETA 08:05:57
2021-04-08 10:21:06 [INFO]	[TRAIN] epoch=8, iter=6430/35000, loss=0.2581, lr=0.002083, batch_cost=1.0691, reader_cost=0.00010, ips=3.7415 samples/sec | ETA 08:29:03
2021-04-08 10:21:17 [INFO]	[TRAIN] epoch=8, iter=6440/35000, loss=0.1802, lr=0.002082, batch_cost=1.1262, reader_cost=0.00014, ips=3.5517 samples/sec | ETA 08:56:05
2021-04-08 10:21:29 [INFO]	[TRAIN] epoch=8, iter=6450/35000, loss=0.2011, lr=0.002081, batch_cost=1.1370, reader_cost=0.00015, ips=3.5181 samples/sec | ETA 09:01:00
2021-04-08 10:21:41 [INFO]	[TRAIN] epoch=8, iter=6460/35000, loss=0.1804, lr=0.002081, batch_cost=1.1968, reader_cost=0.00013, ips=3.3421 samples/sec | ETA 09:29:17
2021-04-08 10:21:51 [INFO]	[TRAIN] epoch=8, iter=6470/35000, loss=0.1894, lr=0.002080, batch_cost=1.0739, reader_cost=0.00011, ips=3.7249 samples/sec | ETA 08:30:37
2021-04-08 10:22:02 [INFO]	[TRAIN] epoch=8, iter=6480/35000, loss=0.2016, lr=0.002079, batch_cost=1.0634, reader_cost=0.00009, ips=3.7615 samples/sec | ETA 08:25:28
2021-04-08 10:22:12 [INFO]	[TRAIN] epoch=8, iter=6490/35000, loss=0.1693, lr=0.002079, batch_cost=0.9690, reader_cost=0.00009, ips=4.1280 samples/sec | ETA 07:40:25
2021-04-08 10:22:22 [INFO]	[TRAIN] epoch=8, iter=6500/35000, loss=0.2311, lr=0.002078, batch_cost=1.0540, reader_cost=0.00009, ips=3.7952 samples/sec | ETA 08:20:38
2021-04-08 10:22:33 [INFO]	[TRAIN] epoch=8, iter=6510/35000, loss=0.2029, lr=0.002077, batch_cost=1.0669, reader_cost=0.00009, ips=3.7490 samples/sec | ETA 08:26:37
2021-04-08 10:22:43 [INFO]	[TRAIN] epoch=8, iter=6520/35000, loss=0.2145, lr=0.002077, batch_cost=1.0271, reader_cost=0.00010, ips=3.8944 samples/sec | ETA 08:07:32
2021-04-08 10:22:54 [INFO]	[TRAIN] epoch=8, iter=6530/35000, loss=0.2249, lr=0.002076, batch_cost=1.0502, reader_cost=0.00009, ips=3.8088 samples/sec | ETA 08:18:19
2021-04-08 10:23:04 [INFO]	[TRAIN] epoch=8, iter=6540/35000, loss=0.1861, lr=0.002075, batch_cost=1.0206, reader_cost=0.00009, ips=3.9194 samples/sec | ETA 08:04:05
2021-04-08 10:23:14 [INFO]	[TRAIN] epoch=8, iter=6550/35000, loss=0.2141, lr=0.002075, batch_cost=1.0540, reader_cost=0.00009, ips=3.7951 samples/sec | ETA 08:19:45
2021-04-08 10:23:26 [INFO]	[TRAIN] epoch=8, iter=6560/35000, loss=0.1925, lr=0.002074, batch_cost=1.1108, reader_cost=0.00008, ips=3.6009 samples/sec | ETA 08:46:32
2021-04-08 10:23:36 [INFO]	[TRAIN] epoch=8, iter=6570/35000, loss=0.2098, lr=0.002073, batch_cost=1.0847, reader_cost=0.00010, ips=3.6878 samples/sec | ETA 08:33:57
2021-04-08 10:23:47 [INFO]	[TRAIN] epoch=8, iter=6580/35000, loss=0.1465, lr=0.002073, batch_cost=1.0294, reader_cost=0.00008, ips=3.8857 samples/sec | ETA 08:07:36
2021-04-08 10:23:57 [INFO]	[TRAIN] epoch=8, iter=6590/35000, loss=0.2230, lr=0.002072, batch_cost=1.0131, reader_cost=0.00008, ips=3.9484 samples/sec | ETA 07:59:41
2021-04-08 10:24:07 [INFO]	[TRAIN] epoch=8, iter=6600/35000, loss=0.1702, lr=0.002071, batch_cost=1.0057, reader_cost=0.00008, ips=3.9773 samples/sec | ETA 07:56:01
2021-04-08 10:24:18 [INFO]	[TRAIN] epoch=8, iter=6610/35000, loss=0.2132, lr=0.002071, batch_cost=1.0792, reader_cost=0.00009, ips=3.7065 samples/sec | ETA 08:30:37
2021-04-08 10:24:28 [INFO]	[TRAIN] epoch=8, iter=6620/35000, loss=0.1934, lr=0.002070, batch_cost=0.9973, reader_cost=0.00008, ips=4.0108 samples/sec | ETA 07:51:43
2021-04-08 10:24:38 [INFO]	[TRAIN] epoch=8, iter=6630/35000, loss=0.2026, lr=0.002070, batch_cost=1.0194, reader_cost=0.00008, ips=3.9241 samples/sec | ETA 08:01:58
2021-04-08 10:24:48 [INFO]	[TRAIN] epoch=8, iter=6640/35000, loss=0.1923, lr=0.002069, batch_cost=1.0225, reader_cost=0.00008, ips=3.9121 samples/sec | ETA 08:03:16
2021-04-08 10:24:59 [INFO]	[TRAIN] epoch=8, iter=6650/35000, loss=0.1979, lr=0.002068, batch_cost=1.0857, reader_cost=0.00010, ips=3.6843 samples/sec | ETA 08:32:59
2021-04-08 10:25:09 [INFO]	[TRAIN] epoch=8, iter=6660/35000, loss=0.2122, lr=0.002068, batch_cost=0.9929, reader_cost=0.00010, ips=4.0284 samples/sec | ETA 07:49:00
2021-04-08 10:25:20 [INFO]	[TRAIN] epoch=8, iter=6670/35000, loss=0.2092, lr=0.002067, batch_cost=1.0789, reader_cost=0.00010, ips=3.7073 samples/sec | ETA 08:29:26
2021-04-08 10:25:30 [INFO]	[TRAIN] epoch=8, iter=6680/35000, loss=0.2578, lr=0.002066, batch_cost=1.0312, reader_cost=0.00011, ips=3.8791 samples/sec | ETA 08:06:42
2021-04-08 10:25:40 [INFO]	[TRAIN] epoch=8, iter=6690/35000, loss=0.2238, lr=0.002066, batch_cost=1.0484, reader_cost=0.00010, ips=3.8154 samples/sec | ETA 08:14:39
2021-04-08 10:25:52 [INFO]	[TRAIN] epoch=8, iter=6700/35000, loss=0.1982, lr=0.002065, batch_cost=1.1295, reader_cost=0.00012, ips=3.5415 samples/sec | ETA 08:52:44
2021-04-08 10:26:02 [INFO]	[TRAIN] epoch=8, iter=6710/35000, loss=0.2036, lr=0.002064, batch_cost=1.0453, reader_cost=0.00010, ips=3.8267 samples/sec | ETA 08:12:51
2021-04-08 10:26:13 [INFO]	[TRAIN] epoch=8, iter=6720/35000, loss=0.2155, lr=0.002064, batch_cost=1.0411, reader_cost=0.00012, ips=3.8421 samples/sec | ETA 08:10:42
2021-04-08 10:26:23 [INFO]	[TRAIN] epoch=8, iter=6730/35000, loss=0.2365, lr=0.002063, batch_cost=1.0675, reader_cost=0.00010, ips=3.7470 samples/sec | ETA 08:22:58
2021-04-08 10:26:34 [INFO]	[TRAIN] epoch=8, iter=6740/35000, loss=0.2238, lr=0.002062, batch_cost=1.0999, reader_cost=0.00012, ips=3.6369 samples/sec | ETA 08:38:01
2021-04-08 10:26:45 [INFO]	[TRAIN] epoch=8, iter=6750/35000, loss=0.2034, lr=0.002062, batch_cost=1.0843, reader_cost=0.00013, ips=3.6891 samples/sec | ETA 08:30:30
2021-04-08 10:26:56 [INFO]	[TRAIN] epoch=8, iter=6760/35000, loss=0.1962, lr=0.002061, batch_cost=1.1255, reader_cost=0.00013, ips=3.5540 samples/sec | ETA 08:49:43
2021-04-08 10:27:07 [INFO]	[TRAIN] epoch=8, iter=6770/35000, loss=0.2059, lr=0.002060, batch_cost=1.0386, reader_cost=0.00009, ips=3.8514 samples/sec | ETA 08:08:39
2021-04-08 10:27:17 [INFO]	[TRAIN] epoch=8, iter=6780/35000, loss=0.3749, lr=0.002060, batch_cost=1.0281, reader_cost=0.00009, ips=3.8907 samples/sec | ETA 08:03:32
2021-04-08 10:27:27 [INFO]	[TRAIN] epoch=8, iter=6790/35000, loss=0.2470, lr=0.002059, batch_cost=1.0143, reader_cost=0.00009, ips=3.9438 samples/sec | ETA 07:56:52
2021-04-08 10:27:39 [INFO]	[TRAIN] epoch=8, iter=6800/35000, loss=0.1812, lr=0.002058, batch_cost=1.1326, reader_cost=0.00014, ips=3.5316 samples/sec | ETA 08:52:19
2021-04-08 10:27:50 [INFO]	[TRAIN] epoch=8, iter=6810/35000, loss=0.2375, lr=0.002058, batch_cost=1.1320, reader_cost=0.00010, ips=3.5337 samples/sec | ETA 08:51:50
2021-04-08 10:28:01 [INFO]	[TRAIN] epoch=8, iter=6820/35000, loss=0.2120, lr=0.002057, batch_cost=1.0803, reader_cost=0.00009, ips=3.7027 samples/sec | ETA 08:27:23
2021-04-08 10:28:11 [INFO]	[TRAIN] epoch=8, iter=6830/35000, loss=0.2117, lr=0.002056, batch_cost=1.0676, reader_cost=0.00009, ips=3.7465 samples/sec | ETA 08:21:15
2021-04-08 10:28:22 [INFO]	[TRAIN] epoch=8, iter=6840/35000, loss=0.2029, lr=0.002056, batch_cost=1.0229, reader_cost=0.00008, ips=3.9105 samples/sec | ETA 08:00:04
2021-04-08 10:28:32 [INFO]	[TRAIN] epoch=8, iter=6850/35000, loss=0.2184, lr=0.002055, batch_cost=1.0579, reader_cost=0.00008, ips=3.7812 samples/sec | ETA 08:16:18
2021-04-08 10:28:42 [INFO]	[TRAIN] epoch=8, iter=6860/35000, loss=0.1779, lr=0.002054, batch_cost=0.9809, reader_cost=0.00008, ips=4.0778 samples/sec | ETA 07:40:03
2021-04-08 10:28:52 [INFO]	[TRAIN] epoch=8, iter=6870/35000, loss=0.2097, lr=0.002054, batch_cost=1.0308, reader_cost=0.00009, ips=3.8807 samples/sec | ETA 08:03:15
2021-04-08 10:29:03 [INFO]	[TRAIN] epoch=8, iter=6880/35000, loss=0.2192, lr=0.002053, batch_cost=1.0967, reader_cost=0.06607, ips=3.6474 samples/sec | ETA 08:33:58
2021-04-08 10:29:14 [INFO]	[TRAIN] epoch=8, iter=6890/35000, loss=0.2067, lr=0.002052, batch_cost=1.1184, reader_cost=0.00012, ips=3.5764 samples/sec | ETA 08:43:59
2021-04-08 10:29:25 [INFO]	[TRAIN] epoch=8, iter=6900/35000, loss=0.2314, lr=0.002052, batch_cost=1.0439, reader_cost=0.00008, ips=3.8317 samples/sec | ETA 08:08:53
2021-04-08 10:29:35 [INFO]	[TRAIN] epoch=8, iter=6910/35000, loss=0.1971, lr=0.002051, batch_cost=1.0595, reader_cost=0.00011, ips=3.7755 samples/sec | ETA 08:16:00
2021-04-08 10:29:47 [INFO]	[TRAIN] epoch=8, iter=6920/35000, loss=0.2354, lr=0.002050, batch_cost=1.1289, reader_cost=0.00012, ips=3.5433 samples/sec | ETA 08:48:19
2021-04-08 10:29:57 [INFO]	[TRAIN] epoch=8, iter=6930/35000, loss=0.2110, lr=0.002050, batch_cost=1.0456, reader_cost=0.00009, ips=3.8257 samples/sec | ETA 08:09:08
2021-04-08 10:30:08 [INFO]	[TRAIN] epoch=8, iter=6940/35000, loss=0.2220, lr=0.002049, batch_cost=1.0347, reader_cost=0.00008, ips=3.8660 samples/sec | ETA 08:03:52
2021-04-08 10:30:18 [INFO]	[TRAIN] epoch=8, iter=6950/35000, loss=0.2090, lr=0.002048, batch_cost=1.0788, reader_cost=0.00011, ips=3.7078 samples/sec | ETA 08:24:20
2021-04-08 10:30:29 [INFO]	[TRAIN] epoch=8, iter=6960/35000, loss=0.2089, lr=0.002048, batch_cost=1.0996, reader_cost=0.00012, ips=3.6376 samples/sec | ETA 08:33:53
2021-04-08 10:30:40 [INFO]	[TRAIN] epoch=8, iter=6970/35000, loss=0.2111, lr=0.002047, batch_cost=1.0604, reader_cost=0.00011, ips=3.7721 samples/sec | ETA 08:15:23
2021-04-08 10:30:51 [INFO]	[TRAIN] epoch=8, iter=6980/35000, loss=0.1905, lr=0.002047, batch_cost=1.0727, reader_cost=0.00010, ips=3.7288 samples/sec | ETA 08:20:58
2021-04-08 10:31:00 [INFO]	[TRAIN] epoch=8, iter=6990/35000, loss=0.1796, lr=0.002046, batch_cost=0.9592, reader_cost=0.00008, ips=4.1703 samples/sec | ETA 07:27:45
2021-04-08 10:31:11 [INFO]	[TRAIN] epoch=8, iter=7000/35000, loss=0.2226, lr=0.002045, batch_cost=1.0320, reader_cost=0.00008, ips=3.8759 samples/sec | ETA 08:01:36
2021-04-08 10:31:11 [INFO]	Start evaluating (total_samples=500, total_iters=500)...
/opt/conda/envs/python35-paddle120-env/lib/python3.7/site-packages/paddle/fluid/dygraph/math_op_patch.py:238: UserWarning: The dtype of left and right variables are not the same, left dtype is VarType.INT32, but right dtype is VarType.BOOL, the right dtype will convert to VarType.INT32
  format(lhs_dtype, rhs_dtype, lhs_dtype))
/opt/conda/envs/python35-paddle120-env/lib/python3.7/site-packages/paddle/fluid/dygraph/math_op_patch.py:238: UserWarning: The dtype of left and right variables are not the same, left dtype is VarType.INT64, but right dtype is VarType.BOOL, the right dtype will convert to VarType.INT64
  format(lhs_dtype, rhs_dtype, lhs_dtype))
500/500 [==============================] - 103s 206ms/step - batch_cost: 0.2054 - reader cost: 0.001
2021-04-08 10:32:54 [INFO]	[EVAL] #Images=500 mIoU=0.1979 Acc=0.9857 Kappa=0.5681 
2021-04-08 10:32:54 [INFO]	[EVAL] Class IoU: 
[0.9872 0.2389 0.4774 0.472  0.154  0.2929 0.3465 0.     0.     0.
 0.     0.     0.     0.     0.    ]
2021-04-08 10:32:54 [INFO]	[EVAL] Class Acc: 
[0.9937 0.3976 0.6139 0.5784 0.2579 0.399  0.4729 0.     0.     0.
 0.     0.     0.     0.     0.    ]
2021-04-08 10:32:56 [INFO]	[EVAL] The model with the best validation mIoU (0.1979) was saved at iter 7000.
2021-04-08 10:33:07 [INFO]	[TRAIN] epoch=9, iter=7010/35000, loss=0.1783, lr=0.002045, batch_cost=1.0728, reader_cost=0.00012, ips=3.7286 samples/sec | ETA 08:20:27
2021-04-08 10:33:17 [INFO]	[TRAIN] epoch=9, iter=7020/35000, loss=0.2172, lr=0.002044, batch_cost=1.0500, reader_cost=0.00013, ips=3.8095 samples/sec | ETA 08:09:39
2021-04-08 10:33:28 [INFO]	[TRAIN] epoch=9, iter=7030/35000, loss=0.2276, lr=0.002043, batch_cost=1.1482, reader_cost=0.00011, ips=3.4838 samples/sec | ETA 08:55:14
2021-04-08 10:33:39 [INFO]	[TRAIN] epoch=9, iter=7040/35000, loss=0.2135, lr=0.002043, batch_cost=1.0231, reader_cost=0.00009, ips=3.9096 samples/sec | ETA 07:56:46
2021-04-08 10:33:50 [INFO]	[TRAIN] epoch=9, iter=7050/35000, loss=0.2092, lr=0.002042, batch_cost=1.1049, reader_cost=0.00014, ips=3.6201 samples/sec | ETA 08:34:42
2021-04-08 10:34:00 [INFO]	[TRAIN] epoch=9, iter=7060/35000, loss=0.1946, lr=0.002041, batch_cost=1.0493, reader_cost=0.00009, ips=3.8121 samples/sec | ETA 08:08:36
2021-04-08 10:34:11 [INFO]	[TRAIN] epoch=9, iter=7070/35000, loss=0.2025, lr=0.002041, batch_cost=1.0962, reader_cost=0.00013, ips=3.6489 samples/sec | ETA 08:30:17
2021-04-08 10:34:21 [INFO]	[TRAIN] epoch=9, iter=7080/35000, loss=0.1925, lr=0.002040, batch_cost=0.9794, reader_cost=0.00008, ips=4.0842 samples/sec | ETA 07:35:44
2021-04-08 10:34:32 [INFO]	[TRAIN] epoch=9, iter=7090/35000, loss=0.1994, lr=0.002039, batch_cost=1.0704, reader_cost=0.00008, ips=3.7370 samples/sec | ETA 08:17:54
2021-04-08 10:34:42 [INFO]	[TRAIN] epoch=9, iter=7100/35000, loss=0.2191, lr=0.002039, batch_cost=1.0315, reader_cost=0.00009, ips=3.8778 samples/sec | ETA 07:59:39
2021-04-08 10:34:52 [INFO]	[TRAIN] epoch=9, iter=7110/35000, loss=0.2162, lr=0.002038, batch_cost=0.9734, reader_cost=0.00008, ips=4.1094 samples/sec | ETA 07:32:27
2021-04-08 10:35:02 [INFO]	[TRAIN] epoch=9, iter=7120/35000, loss=0.1996, lr=0.002037, batch_cost=1.0414, reader_cost=0.00008, ips=3.8411 samples/sec | ETA 08:03:53
2021-04-08 10:35:13 [INFO]	[TRAIN] epoch=9, iter=7130/35000, loss=0.2196, lr=0.002037, batch_cost=1.0631, reader_cost=0.00008, ips=3.7626 samples/sec | ETA 08:13:48
2021-04-08 10:35:24 [INFO]	[TRAIN] epoch=9, iter=7140/35000, loss=0.2188, lr=0.002036, batch_cost=1.0844, reader_cost=0.00010, ips=3.6888 samples/sec | ETA 08:23:30
2021-04-08 10:35:34 [INFO]	[TRAIN] epoch=9, iter=7150/35000, loss=0.2410, lr=0.002035, batch_cost=1.0702, reader_cost=0.00010, ips=3.7375 samples/sec | ETA 08:16:46
2021-04-08 10:35:45 [INFO]	[TRAIN] epoch=9, iter=7160/35000, loss=0.2094, lr=0.002035, batch_cost=1.0379, reader_cost=0.00010, ips=3.8538 samples/sec | ETA 08:01:35
2021-04-08 10:35:55 [INFO]	[TRAIN] epoch=9, iter=7170/35000, loss=0.2016, lr=0.002034, batch_cost=1.0376, reader_cost=0.00009, ips=3.8552 samples/sec | ETA 08:01:15
2021-04-08 10:36:06 [INFO]	[TRAIN] epoch=9, iter=7180/35000, loss=0.2105, lr=0.002033, batch_cost=1.0601, reader_cost=0.00009, ips=3.7734 samples/sec | ETA 08:11:30
2021-04-08 10:36:17 [INFO]	[TRAIN] epoch=9, iter=7190/35000, loss=0.2069, lr=0.002033, batch_cost=1.0858, reader_cost=0.00009, ips=3.6839 samples/sec | ETA 08:23:16
2021-04-08 10:36:27 [INFO]	[TRAIN] epoch=9, iter=7200/35000, loss=0.2015, lr=0.002032, batch_cost=1.0550, reader_cost=0.00009, ips=3.7916 samples/sec | ETA 08:08:47
2021-04-08 10:36:37 [INFO]	[TRAIN] epoch=9, iter=7210/35000, loss=0.1748, lr=0.002031, batch_cost=0.9704, reader_cost=0.00009, ips=4.1220 samples/sec | ETA 07:29:27
2021-04-08 10:36:48 [INFO]	[TRAIN] epoch=9, iter=7220/35000, loss=0.1989, lr=0.002031, batch_cost=1.0699, reader_cost=0.00010, ips=3.7386 samples/sec | ETA 08:15:22
2021-04-08 10:36:58 [INFO]	[TRAIN] epoch=9, iter=7230/35000, loss=0.2276, lr=0.002030, batch_cost=1.0851, reader_cost=0.00014, ips=3.6862 samples/sec | ETA 08:22:14
2021-04-08 10:37:09 [INFO]	[TRAIN] epoch=9, iter=7240/35000, loss=0.2133, lr=0.002029, batch_cost=1.0206, reader_cost=0.00010, ips=3.9192 samples/sec | ETA 07:52:12
2021-04-08 10:37:19 [INFO]	[TRAIN] epoch=9, iter=7250/35000, loss=0.2121, lr=0.002029, batch_cost=1.0623, reader_cost=0.00008, ips=3.7655 samples/sec | ETA 08:11:17
2021-04-08 10:37:29 [INFO]	[TRAIN] epoch=9, iter=7260/35000, loss=0.2147, lr=0.002028, batch_cost=1.0064, reader_cost=0.00008, ips=3.9747 samples/sec | ETA 07:45:16
2021-04-08 10:37:39 [INFO]	[TRAIN] epoch=9, iter=7270/35000, loss=0.2258, lr=0.002027, batch_cost=0.9983, reader_cost=0.00008, ips=4.0069 samples/sec | ETA 07:41:22
2021-04-08 10:37:50 [INFO]	[TRAIN] epoch=9, iter=7280/35000, loss=0.1994, lr=0.002027, batch_cost=1.0797, reader_cost=0.00011, ips=3.7048 samples/sec | ETA 08:18:48
2021-04-08 10:38:01 [INFO]	[TRAIN] epoch=9, iter=7290/35000, loss=0.1831, lr=0.002026, batch_cost=1.1136, reader_cost=0.00013, ips=3.5918 samples/sec | ETA 08:34:18
2021-04-08 10:38:12 [INFO]	[TRAIN] epoch=9, iter=7300/35000, loss=0.2439, lr=0.002025, batch_cost=1.1283, reader_cost=0.00009, ips=3.5452 samples/sec | ETA 08:40:53
2021-04-08 10:38:24 [INFO]	[TRAIN] epoch=9, iter=7310/35000, loss=0.2297, lr=0.002025, batch_cost=1.1026, reader_cost=0.00013, ips=3.6277 samples/sec | ETA 08:28:51
2021-04-08 10:38:34 [INFO]	[TRAIN] epoch=9, iter=7320/35000, loss=0.2291, lr=0.002024, batch_cost=1.0667, reader_cost=0.00011, ips=3.7500 samples/sec | ETA 08:12:05
2021-04-08 10:38:45 [INFO]	[TRAIN] epoch=9, iter=7330/35000, loss=0.2180, lr=0.002023, batch_cost=1.0527, reader_cost=0.00012, ips=3.7996 samples/sec | ETA 08:05:29
2021-04-08 10:38:55 [INFO]	[TRAIN] epoch=9, iter=7340/35000, loss=0.1709, lr=0.002023, batch_cost=1.0047, reader_cost=0.00009, ips=3.9813 samples/sec | ETA 07:43:10
2021-04-08 10:39:05 [INFO]	[TRAIN] epoch=9, iter=7350/35000, loss=0.2211, lr=0.002022, batch_cost=1.0589, reader_cost=0.00008, ips=3.7777 samples/sec | ETA 08:07:57
2021-04-08 10:39:17 [INFO]	[TRAIN] epoch=9, iter=7360/35000, loss=0.2350, lr=0.002022, batch_cost=1.1764, reader_cost=0.00014, ips=3.4001 samples/sec | ETA 09:01:56
2021-04-08 10:39:28 [INFO]	[TRAIN] epoch=9, iter=7370/35000, loss=0.2058, lr=0.002021, batch_cost=1.1304, reader_cost=0.00013, ips=3.5385 samples/sec | ETA 08:40:33
2021-04-08 10:39:38 [INFO]	[TRAIN] epoch=9, iter=7380/35000, loss=0.2030, lr=0.002020, batch_cost=1.0050, reader_cost=0.00009, ips=3.9799 samples/sec | ETA 07:42:39
2021-04-08 10:39:50 [INFO]	[TRAIN] epoch=9, iter=7390/35000, loss=0.1859, lr=0.002020, batch_cost=1.1548, reader_cost=0.00014, ips=3.4639 samples/sec | ETA 08:51:23
2021-04-08 10:40:02 [INFO]	[TRAIN] epoch=9, iter=7400/35000, loss=0.2126, lr=0.002019, batch_cost=1.1655, reader_cost=0.00014, ips=3.4321 samples/sec | ETA 08:56:07
2021-04-08 10:40:12 [INFO]	[TRAIN] epoch=9, iter=7410/35000, loss=0.2128, lr=0.002018, batch_cost=1.0643, reader_cost=0.00009, ips=3.7585 samples/sec | ETA 08:09:23
2021-04-08 10:40:22 [INFO]	[TRAIN] epoch=9, iter=7420/35000, loss=0.2045, lr=0.002018, batch_cost=1.0039, reader_cost=0.00009, ips=3.9843 samples/sec | ETA 07:41:28
2021-04-08 10:40:33 [INFO]	[TRAIN] epoch=9, iter=7430/35000, loss=0.1948, lr=0.002017, batch_cost=1.0320, reader_cost=0.00009, ips=3.8758 samples/sec | ETA 07:54:13
2021-04-08 10:40:43 [INFO]	[TRAIN] epoch=9, iter=7440/35000, loss=0.2034, lr=0.002016, batch_cost=1.0771, reader_cost=0.00009, ips=3.7138 samples/sec | ETA 08:14:43
2021-04-08 10:40:54 [INFO]	[TRAIN] epoch=9, iter=7450/35000, loss=0.2285, lr=0.002016, batch_cost=1.1053, reader_cost=0.00009, ips=3.6189 samples/sec | ETA 08:27:30
2021-04-08 10:41:05 [INFO]	[TRAIN] epoch=9, iter=7460/35000, loss=0.1733, lr=0.002015, batch_cost=1.0539, reader_cost=0.00012, ips=3.7953 samples/sec | ETA 08:03:45
2021-04-08 10:41:16 [INFO]	[TRAIN] epoch=9, iter=7470/35000, loss=0.2410, lr=0.002014, batch_cost=1.0781, reader_cost=0.00008, ips=3.7103 samples/sec | ETA 08:14:39
2021-04-08 10:41:26 [INFO]	[TRAIN] epoch=9, iter=7480/35000, loss=0.2051, lr=0.002014, batch_cost=1.0267, reader_cost=0.00009, ips=3.8960 samples/sec | ETA 07:50:54
2021-04-08 10:41:37 [INFO]	[TRAIN] epoch=9, iter=7490/35000, loss=0.2213, lr=0.002013, batch_cost=1.0964, reader_cost=0.00014, ips=3.6482 samples/sec | ETA 08:22:42
2021-04-08 10:41:48 [INFO]	[TRAIN] epoch=9, iter=7500/35000, loss=0.2178, lr=0.002012, batch_cost=1.1163, reader_cost=0.00012, ips=3.5834 samples/sec | ETA 08:31:36
2021-04-08 10:41:59 [INFO]	[TRAIN] epoch=9, iter=7510/35000, loss=0.1996, lr=0.002012, batch_cost=1.0849, reader_cost=0.00012, ips=3.6869 samples/sec | ETA 08:17:04
2021-04-08 10:42:10 [INFO]	[TRAIN] epoch=9, iter=7520/35000, loss=0.2209, lr=0.002011, batch_cost=1.0595, reader_cost=0.00010, ips=3.7752 samples/sec | ETA 08:05:16
2021-04-08 10:42:20 [INFO]	[TRAIN] epoch=9, iter=7530/35000, loss=0.1771, lr=0.002010, batch_cost=1.0364, reader_cost=0.00008, ips=3.8595 samples/sec | ETA 07:54:30
2021-04-08 10:42:31 [INFO]	[TRAIN] epoch=9, iter=7540/35000, loss=0.1948, lr=0.002010, batch_cost=1.0658, reader_cost=0.00010, ips=3.7531 samples/sec | ETA 08:07:46
2021-04-08 10:42:41 [INFO]	[TRAIN] epoch=9, iter=7550/35000, loss=0.2044, lr=0.002009, batch_cost=1.0817, reader_cost=0.00008, ips=3.6979 samples/sec | ETA 08:14:52
2021-04-08 10:42:53 [INFO]	[TRAIN] epoch=9, iter=7560/35000, loss=0.2094, lr=0.002008, batch_cost=1.1292, reader_cost=0.00011, ips=3.5424 samples/sec | ETA 08:36:24
2021-04-08 10:43:03 [INFO]	[TRAIN] epoch=9, iter=7570/35000, loss=0.1898, lr=0.002008, batch_cost=1.0158, reader_cost=0.00012, ips=3.9377 samples/sec | ETA 07:44:24
2021-04-08 10:43:13 [INFO]	[TRAIN] epoch=9, iter=7580/35000, loss=0.2550, lr=0.002007, batch_cost=1.0497, reader_cost=0.00015, ips=3.8105 samples/sec | ETA 07:59:43
2021-04-08 10:43:23 [INFO]	[TRAIN] epoch=9, iter=7590/35000, loss=0.1830, lr=0.002006, batch_cost=0.9935, reader_cost=0.00008, ips=4.0261 samples/sec | ETA 07:33:52
2021-04-08 10:43:34 [INFO]	[TRAIN] epoch=9, iter=7600/35000, loss=0.1888, lr=0.002006, batch_cost=1.0526, reader_cost=0.00010, ips=3.8000 samples/sec | ETA 08:00:41
2021-04-08 10:43:45 [INFO]	[TRAIN] epoch=9, iter=7610/35000, loss=0.2366, lr=0.002005, batch_cost=1.0825, reader_cost=0.00011, ips=3.6952 samples/sec | ETA 08:14:09
2021-04-08 10:43:56 [INFO]	[TRAIN] epoch=9, iter=7620/35000, loss=0.2138, lr=0.002004, batch_cost=1.0789, reader_cost=0.00008, ips=3.7076 samples/sec | ETA 08:12:18
2021-04-08 10:44:06 [INFO]	[TRAIN] epoch=9, iter=7630/35000, loss=0.2137, lr=0.002004, batch_cost=1.0301, reader_cost=0.00010, ips=3.8831 samples/sec | ETA 07:49:53
2021-04-08 10:44:16 [INFO]	[TRAIN] epoch=9, iter=7640/35000, loss=0.2155, lr=0.002003, batch_cost=1.0623, reader_cost=0.00008, ips=3.7655 samples/sec | ETA 08:04:24
2021-04-08 10:44:28 [INFO]	[TRAIN] epoch=9, iter=7650/35000, loss=0.1630, lr=0.002002, batch_cost=1.1322, reader_cost=0.00012, ips=3.5331 samples/sec | ETA 08:36:04
2021-04-08 10:44:39 [INFO]	[TRAIN] epoch=9, iter=7660/35000, loss=0.1974, lr=0.002002, batch_cost=1.1523, reader_cost=0.00014, ips=3.4712 samples/sec | ETA 08:45:04
2021-04-08 10:44:50 [INFO]	[TRAIN] epoch=9, iter=7670/35000, loss=0.2032, lr=0.002001, batch_cost=1.0271, reader_cost=0.00011, ips=3.8944 samples/sec | ETA 07:47:50
2021-04-08 10:45:01 [INFO]	[TRAIN] epoch=9, iter=7680/35000, loss=0.2560, lr=0.002000, batch_cost=1.1139, reader_cost=0.00012, ips=3.5911 samples/sec | ETA 08:27:11
2021-04-08 10:45:11 [INFO]	[TRAIN] epoch=9, iter=7690/35000, loss=0.1690, lr=0.002000, batch_cost=1.0349, reader_cost=0.00013, ips=3.8651 samples/sec | ETA 07:51:03
2021-04-08 10:45:22 [INFO]	[TRAIN] epoch=9, iter=7700/35000, loss=0.1883, lr=0.001999, batch_cost=1.0980, reader_cost=0.00011, ips=3.6431 samples/sec | ETA 08:19:34
2021-04-08 10:45:33 [INFO]	[TRAIN] epoch=9, iter=7710/35000, loss=0.2164, lr=0.001998, batch_cost=1.0833, reader_cost=0.00009, ips=3.6923 samples/sec | ETA 08:12:44
2021-04-08 10:45:44 [INFO]	[TRAIN] epoch=9, iter=7720/35000, loss=0.2087, lr=0.001998, batch_cost=1.1057, reader_cost=0.00012, ips=3.6175 samples/sec | ETA 08:22:44
2021-04-08 10:45:55 [INFO]	[TRAIN] epoch=9, iter=7730/35000, loss=0.2024, lr=0.001997, batch_cost=1.1481, reader_cost=0.00014, ips=3.4839 samples/sec | ETA 08:41:49
2021-04-08 10:46:06 [INFO]	[TRAIN] epoch=9, iter=7740/35000, loss=0.2127, lr=0.001996, batch_cost=1.0996, reader_cost=0.00013, ips=3.6378 samples/sec | ETA 08:19:34
2021-04-08 10:46:16 [INFO]	[TRAIN] epoch=9, iter=7750/35000, loss=0.1886, lr=0.001996, batch_cost=0.9753, reader_cost=0.00014, ips=4.1013 samples/sec | ETA 07:22:56
2021-04-08 10:46:28 [INFO]	[TRAIN] epoch=9, iter=7760/35000, loss=0.1873, lr=0.001995, batch_cost=1.2101, reader_cost=0.07465, ips=3.3056 samples/sec | ETA 09:09:22
2021-04-08 10:46:38 [INFO]	[TRAIN] epoch=9, iter=7770/35000, loss=0.1910, lr=0.001995, batch_cost=1.0024, reader_cost=0.00008, ips=3.9906 samples/sec | ETA 07:34:54
2021-04-08 10:46:49 [INFO]	[TRAIN] epoch=9, iter=7780/35000, loss=0.2153, lr=0.001994, batch_cost=1.0494, reader_cost=0.00008, ips=3.8118 samples/sec | ETA 07:56:04
2021-04-08 10:47:00 [INFO]	[TRAIN] epoch=9, iter=7790/35000, loss=0.2261, lr=0.001993, batch_cost=1.0863, reader_cost=0.00008, ips=3.6823 samples/sec | ETA 08:12:37
2021-04-08 10:47:11 [INFO]	[TRAIN] epoch=9, iter=7800/35000, loss=0.2153, lr=0.001993, batch_cost=1.1479, reader_cost=0.00011, ips=3.4847 samples/sec | ETA 08:40:22
2021-04-08 10:47:22 [INFO]	[TRAIN] epoch=9, iter=7810/35000, loss=0.1830, lr=0.001992, batch_cost=1.0728, reader_cost=0.00011, ips=3.7287 samples/sec | ETA 08:06:08
2021-04-08 10:47:33 [INFO]	[TRAIN] epoch=9, iter=7820/35000, loss=0.1824, lr=0.001991, batch_cost=1.1096, reader_cost=0.00014, ips=3.6048 samples/sec | ETA 08:22:39
2021-04-08 10:47:45 [INFO]	[TRAIN] epoch=9, iter=7830/35000, loss=0.1868, lr=0.001991, batch_cost=1.1620, reader_cost=0.00013, ips=3.4424 samples/sec | ETA 08:46:10
2021-04-08 10:47:56 [INFO]	[TRAIN] epoch=9, iter=7840/35000, loss=0.2101, lr=0.001990, batch_cost=1.1051, reader_cost=0.00012, ips=3.6195 samples/sec | ETA 08:20:14
2021-04-08 10:48:06 [INFO]	[TRAIN] epoch=9, iter=7850/35000, loss=0.1703, lr=0.001989, batch_cost=1.0187, reader_cost=0.00010, ips=3.9266 samples/sec | ETA 07:40:57
2021-04-08 10:48:17 [INFO]	[TRAIN] epoch=9, iter=7860/35000, loss=0.2123, lr=0.001989, batch_cost=1.0715, reader_cost=0.00008, ips=3.7329 samples/sec | ETA 08:04:41
2021-04-08 10:48:27 [INFO]	[TRAIN] epoch=9, iter=7870/35000, loss=0.2546, lr=0.001988, batch_cost=1.0805, reader_cost=0.00009, ips=3.7021 samples/sec | ETA 08:08:33
2021-04-08 10:48:38 [INFO]	[TRAIN] epoch=10, iter=7880/35000, loss=0.2201, lr=0.001987, batch_cost=1.0298, reader_cost=0.00009, ips=3.8841 samples/sec | ETA 07:45:29
2021-04-08 10:48:48 [INFO]	[TRAIN] epoch=10, iter=7890/35000, loss=0.1705, lr=0.001987, batch_cost=1.0292, reader_cost=0.00009, ips=3.8866 samples/sec | ETA 07:45:00
2021-04-08 10:48:58 [INFO]	[TRAIN] epoch=10, iter=7900/35000, loss=0.1844, lr=0.001986, batch_cost=1.0092, reader_cost=0.00009, ips=3.9636 samples/sec | ETA 07:35:48
2021-04-08 10:49:08 [INFO]	[TRAIN] epoch=10, iter=7910/35000, loss=0.1921, lr=0.001985, batch_cost=1.0208, reader_cost=0.00009, ips=3.9185 samples/sec | ETA 07:40:53
2021-04-08 10:49:19 [INFO]	[TRAIN] epoch=10, iter=7920/35000, loss=0.1991, lr=0.001985, batch_cost=1.1182, reader_cost=0.00011, ips=3.5773 samples/sec | ETA 08:24:39
2021-04-08 10:49:30 [INFO]	[TRAIN] epoch=10, iter=7930/35000, loss=0.2251, lr=0.001984, batch_cost=1.0473, reader_cost=0.00008, ips=3.8194 samples/sec | ETA 07:52:29
2021-04-08 10:49:41 [INFO]	[TRAIN] epoch=10, iter=7940/35000, loss=0.1971, lr=0.001983, batch_cost=1.0838, reader_cost=0.00009, ips=3.6908 samples/sec | ETA 08:08:46
2021-04-08 10:49:51 [INFO]	[TRAIN] epoch=10, iter=7950/35000, loss=0.1989, lr=0.001983, batch_cost=1.0728, reader_cost=0.00013, ips=3.7285 samples/sec | ETA 08:03:40
2021-04-08 10:50:03 [INFO]	[TRAIN] epoch=10, iter=7960/35000, loss=0.2182, lr=0.001982, batch_cost=1.1091, reader_cost=0.00014, ips=3.6064 samples/sec | ETA 08:19:50
2021-04-08 10:50:13 [INFO]	[TRAIN] epoch=10, iter=7970/35000, loss=0.2061, lr=0.001981, batch_cost=1.0185, reader_cost=0.00010, ips=3.9275 samples/sec | ETA 07:38:48
2021-04-08 10:50:24 [INFO]	[TRAIN] epoch=10, iter=7980/35000, loss=0.2233, lr=0.001981, batch_cost=1.0997, reader_cost=0.00008, ips=3.6374 samples/sec | ETA 08:15:13
2021-04-08 10:50:35 [INFO]	[TRAIN] epoch=10, iter=7990/35000, loss=0.3235, lr=0.001980, batch_cost=1.1299, reader_cost=0.00011, ips=3.5400 samples/sec | ETA 08:28:39
2021-04-08 10:50:47 [INFO]	[TRAIN] epoch=10, iter=8000/35000, loss=0.2507, lr=0.001979, batch_cost=1.1588, reader_cost=0.00011, ips=3.4518 samples/sec | ETA 08:41:27
2021-04-08 10:50:47 [INFO]	Start evaluating (total_samples=500, total_iters=500)...
500/500 [==============================] - 100s 200ms/step - batch_cost: 0.1996 - reader cost: 7.0843e-0
2021-04-08 10:52:27 [INFO]	[EVAL] #Images=500 mIoU=0.1877 Acc=0.9878 Kappa=0.5409 
2021-04-08 10:52:27 [INFO]	[EVAL] Class IoU: 
[0.9889 0.238  0.4705 0.4509 0.117  0.2911 0.2587 0.     0.     0.
 0.     0.     0.     0.     0.    ]
2021-04-08 10:52:27 [INFO]	[EVAL] Class Acc: 
[0.9912 0.3593 0.721  0.5205 0.4217 0.4955 0.7894 0.     0.     0.
 0.     0.     0.     0.     0.    ]
2021-04-08 10:52:28 [INFO]	[EVAL] The model with the best validation mIoU (0.1979) was saved at iter 7000.
2021-04-08 10:52:39 [INFO]	[TRAIN] epoch=10, iter=8010/35000, loss=0.1795, lr=0.001979, batch_cost=1.0620, reader_cost=0.00013, ips=3.7664 samples/sec | ETA 07:57:43
2021-04-08 10:52:49 [INFO]	[TRAIN] epoch=10, iter=8020/35000, loss=0.1845, lr=0.001978, batch_cost=1.0571, reader_cost=0.00011, ips=3.7841 samples/sec | ETA 07:55:19
2021-04-08 10:53:00 [INFO]	[TRAIN] epoch=10, iter=8030/35000, loss=0.2287, lr=0.001977, batch_cost=1.1007, reader_cost=0.00010, ips=3.6342 samples/sec | ETA 08:14:44
2021-04-08 10:53:11 [INFO]	[TRAIN] epoch=10, iter=8040/35000, loss=0.2048, lr=0.001977, batch_cost=1.0854, reader_cost=0.00014, ips=3.6851 samples/sec | ETA 08:07:43
2021-04-08 10:53:23 [INFO]	[TRAIN] epoch=10, iter=8050/35000, loss=0.1982, lr=0.001976, batch_cost=1.1484, reader_cost=0.00013, ips=3.4832 samples/sec | ETA 08:35:48
2021-04-08 10:53:34 [INFO]	[TRAIN] epoch=10, iter=8060/35000, loss=0.2150, lr=0.001975, batch_cost=1.1500, reader_cost=0.00015, ips=3.4782 samples/sec | ETA 08:36:21
^C
Traceback (most recent call last):
  File "train.py", line 154, in <module>
    main(args)
  File "train.py", line 149, in main
    keep_checkpoint_max=args.keep_checkpoint_max)
  File "/home/aistudio/PaddleSeg/paddleseg/core/train.py", line 151, in train
    edges=edges)
  File "/home/aistudio/PaddleSeg/paddleseg/core/train.py", line 46, in loss_computation
    loss_list.append(losses['coef'][i] * loss_i(logits, labels))
  File "/opt/conda/envs/python35-paddle120-env/lib/python3.7/site-packages/paddle/fluid/dygraph/layers.py", line 902, in __call__
    outputs = self.forward(*inputs, **kwargs)
  File "/home/aistudio/PaddleSeg/paddleseg/models/losses/mixed_loss.py", line 56, in forward
    output = loss(logits, labels)
  File "/opt/conda/envs/python35-paddle120-env/lib/python3.7/site-packages/paddle/fluid/dygraph/layers.py", line 902, in __call__
    outputs = self.forward(*inputs, **kwargs)
  File "/home/aistudio/PaddleSeg/paddleseg/models/losses/lovasz_loss.py", line 53, in forward
    loss = lovasz_softmax_flat(vprobas, vlabels, classes=self.classes)
  File "/home/aistudio/PaddleSeg/paddleseg/models/losses/lovasz_loss.py", line 191, in lovasz_softmax_flat
    grad = lovasz_grad(fg_sorted)
  File "/home/aistudio/PaddleSeg/paddleseg/models/losses/lovasz_loss.py", line 98, in lovasz_grad
    jaccard[1:p] = jaccard[1:p] - jaccard[0:-1]
KeyboardInterrupt
</span></span>
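Two of the quantities in the logs above can be sanity-checked directly: the ETA is just batch_cost times the remaining iterations, and mIoU is the plain mean of the 15 per-class IoUs. The eight classes that never appear in a prediction contribute zeros, which is why mIoU stays below 0.2 even though pixel accuracy is above 0.98. A minimal check against the logged values:

```python
# ETA check: at iter=8060 the log reports batch_cost=1.1500 and ETA 08:36:21.
remaining = 35000 - 8060
eta_seconds = round(remaining * 1.1500)
h, rem = divmod(eta_seconds, 3600)
m, s = divmod(rem, 60)
eta = f'{h:02d}:{m:02d}:{s:02d}'  # matches the logged ETA

# mIoU check: mean of the 15 per-class IoUs from the iter-8000 [EVAL] line.
class_iou = [0.9889, 0.238, 0.4705, 0.4509, 0.117, 0.2911, 0.2587] + [0.0] * 8
miou = round(sum(class_iou) / len(class_iou), 4)  # matches the logged mIoU=0.1877
```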

Evaluate

In [20]
!python val.py \
       --config configs/ocrnet/ocrnet_hrnetw18_cityscapes_1024x512_160k_lovasz_softmax.yml \
       --model_path output/iter_7000/model.pdparams
/opt/conda/envs/python35-paddle120-env/lib/python3.7/site-packages/setuptools/depends.py:2: DeprecationWarning: the imp module is deprecated in favour of importlib; see the module's documentation for alternative uses
  import imp
2021-04-08 10:54:19 [INFO]	
---------------Config Information---------------
SOLVER:
  CROSS_ENTROPY_WEIGHT: dynamic
  LR: 0.005
  LR_POLICY: poly
  NUM_EPOCHS: 40
  OPTIMIZER: sgd
batch_size: 4
iters: 35000
learning_rate:
  decay:
    end_lr: 0.0
    power: 0.9
    type: poly
  value: 0.0025
loss:
  coef:
  - 1
  - 0.4
  types:
  - coef:
    - 0.8
    - 0.2
    losses:
    - type: CrossEntropyLoss
    - type: LovaszSoftmaxLoss
    type: MixedLoss
  - coef:
    - 0.8
    - 0.2
    losses:
    - type: CrossEntropyLoss
    - type: LovaszSoftmaxLoss
    type: MixedLoss
model:
  backbone:
    pretrained: https://bj.bcebos.com/paddleseg/dygraph/hrnet_w18_ssld.tar.gz
    type: HRNet_W18
  backbone_indices:
  - 0
  type: OCRNet
optimizer:
  momentum: 0.9
  type: sgd
  weight_decay: 4.0e-05
train_dataset:
  dataset_root: /home/aistudio/
  mode: train
  num_classes: 15
  train_path: /home/aistudio/train_list.txt
  transforms:
  - max_scale_factor: 2.0
    min_scale_factor: 0.5
    scale_step_size: 0.25
    type: ResizeStepScaling
  - max_rotation: 30
    type: RandomRotation
  - type: RandomHorizontalFlip
  - type: RandomVerticalFlip
  - crop_size:
    - 1024
    - 512
    type: RandomPaddingCrop
  - type: RandomBlur
  - brightness_range: 0.4
    contrast_range: 0.4
    saturation_range: 0.4
    type: RandomDistort
  - type: Normalize
  type: Dataset
val_dataset:
  dataset_root: /home/aistudio/
  mode: val
  num_classes: 15
  transforms:
  - type: Normalize
  type: Dataset
  val_path: /home/aistudio/val_list.txt
------------------------------------------------
W0408 10:54:19.346495 13276 device_context.cc:362] Please NOTE: device: 0, GPU Compute Capability: 7.0, Driver API Version: 10.1, Runtime API Version: 10.1
W0408 10:54:19.346547 13276 device_context.cc:372] device: 0, cuDNN Version: 7.6.
2021-04-08 10:54:24 [INFO]	Loading pretrained model from https://bj.bcebos.com/paddleseg/dygraph/hrnet_w18_ssld.tar.gz
2021-04-08 10:54:24,349 - INFO - Lock 140006487673616 acquired on /home/aistudio/.paddleseg/tmp/hrnet_w18_ssld
2021-04-08 10:54:24,349 - INFO - Lock 140006487673616 released on /home/aistudio/.paddleseg/tmp/hrnet_w18_ssld
2021-04-08 10:54:25 [INFO]	There are 1525/1525 variables loaded into HRNet.
2021-04-08 10:54:25 [INFO]	Loading pretrained model from output/iter_7000/model.pdparams
2021-04-08 10:54:26 [INFO]	There are 1583/1583 variables loaded into OCRNet.
2021-04-08 10:54:26 [INFO]	Loaded trained params of model successfully
2021-04-08 10:54:26 [INFO]	Start evaluating (total_samples=500, total_iters=500)...
/opt/conda/envs/python35-paddle120-env/lib/python3.7/site-packages/paddle/fluid/dygraph/math_op_patch.py:238: UserWarning: The dtype of left and right variables are not the same, left dtype is VarType.INT32, but right dtype is VarType.BOOL, the right dtype will convert to VarType.INT32
  format(lhs_dtype, rhs_dtype, lhs_dtype))
/opt/conda/envs/python35-paddle120-env/lib/python3.7/site-packages/paddle/fluid/dygraph/math_op_patch.py:238: UserWarning: The dtype of left and right variables are not the same, left dtype is VarType.INT64, but right dtype is VarType.BOOL, the right dtype will convert to VarType.INT64
  format(lhs_dtype, rhs_dtype, lhs_dtype))
500/500 [==============================] - 93s 185ms/step - batch_cost: 0.1851 - reader cost: 8.3591e-
2021-04-08 10:55:59 [INFO]	[EVAL] #Images=500 mIoU=0.1979 Acc=0.9857 Kappa=0.5681 
2021-04-08 10:55:59 [INFO]	[EVAL] Class IoU: 
[0.9872 0.2389 0.4774 0.472  0.154  0.2929 0.3465 0.     0.     0.
 0.     0.     0.     0.     0.    ]
2021-04-08 10:55:59 [INFO]	[EVAL] Class Acc: 
[0.9937 0.3976 0.6139 0.5784 0.2579 0.399  0.4729 0.     0.     0.
 0.     0.     0.     0.     0.    ]

Predict

In [21]
!python predict.py \
       --config configs/ocrnet/ocrnet_hrnetw18_cityscapes_1024x512_160k_lovasz_softmax.yml \
       --model_path output/iter_7000/model.pdparams \
       --image_path ../infer/4346.png \
       --save_dir output/result
/opt/conda/envs/python35-paddle120-env/lib/python3.7/site-packages/setuptools/depends.py:2: DeprecationWarning: the imp module is deprecated in favour of importlib; see the module's documentation for alternative uses
  import imp
2021-04-08 10:59:44 [INFO]	
---------------Config Information---------------
SOLVER:
  CROSS_ENTROPY_WEIGHT: dynamic
  LR: 0.005
  LR_POLICY: poly
  NUM_EPOCHS: 40
  OPTIMIZER: sgd
batch_size: 4
iters: 35000
learning_rate:
  decay:
    end_lr: 0.0
    power: 0.9
    type: poly
  value: 0.0025
loss:
  coef:
  - 1
  - 0.4
  types:
  - coef:
    - 0.8
    - 0.2
    losses:
    - type: CrossEntropyLoss
    - type: LovaszSoftmaxLoss
    type: MixedLoss
  - coef:
    - 0.8
    - 0.2
    losses:
    - type: CrossEntropyLoss
    - type: LovaszSoftmaxLoss
    type: MixedLoss
model:
  backbone:
    pretrained: https://bj.bcebos.com/paddleseg/dygraph/hrnet_w18_ssld.tar.gz
    type: HRNet_W18
  backbone_indices:
  - 0
  type: OCRNet
optimizer:
  momentum: 0.9
  type: sgd
  weight_decay: 4.0e-05
train_dataset:
  dataset_root: /home/aistudio/
  mode: train
  num_classes: 15
  train_path: /home/aistudio/train_list.txt
  transforms:
  - max_scale_factor: 2.0
    min_scale_factor: 0.5
    scale_step_size: 0.25
    type: ResizeStepScaling
  - max_rotation: 30
    type: RandomRotation
  - type: RandomHorizontalFlip
  - type: RandomVerticalFlip
  - crop_size:
    - 1024
    - 512
    type: RandomPaddingCrop
  - type: RandomBlur
  - brightness_range: 0.4
    contrast_range: 0.4
    saturation_range: 0.4
    type: RandomDistort
  - type: Normalize
  type: Dataset
val_dataset:
  dataset_root: /home/aistudio/
  mode: val
  num_classes: 15
  transforms:
  - type: Normalize
  type: Dataset
  val_path: /home/aistudio/val_list.txt
------------------------------------------------
W0408 10:59:44.776221 13753 device_context.cc:362] Please NOTE: device: 0, GPU Compute Capability: 7.0, Driver API Version: 10.1, Runtime API Version: 10.1
W0408 10:59:44.776278 13753 device_context.cc:372] device: 0, cuDNN Version: 7.6.
2021-04-08 10:59:49 [INFO]	Loading pretrained model from https://bj.bcebos.com/paddleseg/dygraph/hrnet_w18_ssld.tar.gz
2021-04-08 10:59:49,681 - INFO - Lock 140715049213264 acquired on /home/aistudio/.paddleseg/tmp/hrnet_w18_ssld
2021-04-08 10:59:49,681 - INFO - Lock 140715049213264 released on /home/aistudio/.paddleseg/tmp/hrnet_w18_ssld
2021-04-08 10:59:50 [INFO]	There are 1525/1525 variables loaded into HRNet.
2021-04-08 10:59:50 [INFO]	Number of predict images = 1
2021-04-08 10:59:50 [INFO]	Loading pretrained model from output/iter_7000/model.pdparams
2021-04-08 10:59:51 [INFO]	There are 1583/1583 variables loaded into OCRNet.
2021-04-08 10:59:51 [INFO]	Start to predict...
1/1 [==============================] - 1s 527ms/step
In [42]
%matplotlib inline
from PIL import Image  # Image is used below but was not imported in this cell
import matplotlib.pyplot as plt
img = Image.open('../infer/4346.png')
In [43]
# Original image
img
<PIL.PngImagePlugin.PngImageFile image mode=RGB size=1536x1536 at 0x7F7E0C477A50>
In [ ]
img = Image.open('output/result/added_prediction/4346.png')
In [41]
# Prediction result
img
<PIL.PngImagePlugin.PngImageFile image mode=RGB size=1536x1536 at 0x7F7E0C46D610>
In [ ]
img = Image.open('output/result/pseudo_color_prediction/4346.png')
In [39]
# Pseudo-color annotation
img
<PIL.PngImagePlugin.PngImageFile image mode=P size=1536x1536 at 0x7F7E0C48A350>
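Displaying the three results one cell at a time makes them hard to compare. A small matplotlib sketch can lay out the original frame, the blended prediction, and the pseudo-color mask side by side; the arrays below are synthetic placeholders, so substitute the PIL images opened above:

```python
import matplotlib
matplotlib.use('Agg')  # headless backend; drop this line inside a notebook
import matplotlib.pyplot as plt
import numpy as np

# Placeholders standing in for the three images loaded above; replace with
# np.asarray(img) for each of the PIL images.
images = [np.random.rand(256, 256, 3) for _ in range(3)]
titles = ['original', 'added_prediction', 'pseudo_color_prediction']

fig, axes = plt.subplots(1, 3, figsize=(15, 5))
for ax, im, title in zip(axes, images, titles):
    ax.imshow(im)
    ax.set_title(title)
    ax.axis('off')
fig.savefig('comparison.png')
```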

Export a static graph model

In [9]
!python export.py \
       --config configs/ocrnet/ocrnet_hrnetw18_cityscapes_1024x512_160k_lovasz_softmax.yml \
       --model_path output/iter_8000/model.pdparams
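export.py writes the static-graph files next to the trained weights. For PaddleSeg 2.0 this is typically model.pdmodel (the graph), model.pdiparams (the weights), and deploy.yaml (the preprocessing config consumed by infer.py below). A quick sketch, with the expected file list as an assumption, to confirm an export directory is complete before deployment:

```python
import os

# Static-graph artifacts expected from export.py (assumed names for PaddleSeg 2.0).
EXPECTED = ['model.pdmodel', 'model.pdiparams', 'deploy.yaml']

def missing_artifacts(export_dir):
    """Return the expected export files that are absent from export_dir."""
    return [f for f in EXPECTED
            if not os.path.exists(os.path.join(export_dir, f))]
```

A freshly exported directory should report nothing missing; anything returned here will make the deployment step fail.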

Python predictive deployment

In [11]
!ls
benchmark  deploy     legacy   paddleseg     README.md	       slim	 val.py
configs    docs       LICENSE  predict.py    requirements.txt  tools
contrib    export.py  output   README_CN.md  setup.py	       train.py
In [12]
# Copy infer.py into the PaddleSeg root directory
!cp deploy/python/infer.py infer.py 
In [13]
# The predicted pseudo-color images are saved to ./output by default
!python infer.py --config output/deploy.yaml --image_path ../infer/4346.png
/opt/conda/envs/python35-paddle120-env/lib/python3.7/site-packages/setuptools/depends.py:2: DeprecationWarning: the imp module is deprecated in favour of importlib; see the module's documentation for alternative uses
  import imp

Extra article: Handling data analysis and training tasks with script tasks

Python OS commands and common terminal command lines

  • Display the current path, equivalent to the terminal command pwd
import os
print(os.getcwd())
  • Fetch the PaddleSeg suite with git, equivalent to the terminal command git clone https://gitee.com/paddlepaddle/PaddleSeg.git
import os
os.system("git clone https://gitee.com/paddlepaddle/PaddleSeg.git")
  • Install the dependencies, specifying a mirror source
import os
os.system("cd PaddleSeg && pip install -r requirements.txt -i https://mirror.baidu.com/pypi/simple")
  • Execute a Python file
import os
os.system("python make_list.py")
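os.system only returns an exit status, which makes script tasks hard to debug when a command fails silently. When the command's output is needed for logging, subprocess.run is a sturdier alternative; a minimal sketch:

```python
import subprocess
import sys

# Run a command and capture its output instead of letting it go to the console.
# sys.executable avoids assuming a "python" binary is on PATH.
result = subprocess.run(
    [sys.executable, '-c', 'print(40 + 2)'],
    capture_output=True, text=True, check=True)
print(result.stdout.strip())  # the command's stdout, e.g. "42" here
```

check=True raises CalledProcessError on a non-zero exit code, so failures surface immediately in the script-task log instead of being swallowed.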

Basic Operations of Script Tasks

Collection of script task examples 

 

Origin blog.csdn.net/m0_68036862/article/details/131348646