Mask R-CNN (if you have questions, please leave them in the comments):
If you are not familiar with the theory, spend ten minutes on my earlier post first, then come back for this hands-on walkthrough. This post is an introduction to Mask R-CNN; in later posts I will cover how to use Mask R-CNN for multi-person keypoint detection and pose estimation, how to train Mask R-CNN on your own dataset, and a MobileNet version of Mask R-CNN (PS: I am working on that now and will share it on my GitHub when it is done; if you are interested, keep an eye on my follow-up posts, which will be updated soon).
Theory: Mask R-CNN fundamentals
First, download the project source code from GitHub: Mask RCNN project source code
I. Environment setup:
Mask R-CNN is built on Python 3, Keras, and TensorFlow.
- Python 3.4+
- TensorFlow 1.3+
- Keras 2.0.8+
- Jupyter Notebook
- Numpy, skimage, scipy
II. Installation
1. Install dependencies:
pip3 install -r requirements.txt
2. Clone this repository.
3. Run setup from the repository root directory:
python3 setup.py install
4. Download the pre-trained weights mask_rcnn_coco.h5.
If you need to train or test on the COCO dataset, you must install pycocotools: clone it, run make to build it, then copy the generated pycocotools folder into the coco folder under samples.
- Linux: https://github.com/waleedka/coco
- Windows: https://github.com/philferriere/cocoapi. You must have the Visual C++ 2015 build tools on your path (see the repo for additional details)
Downloading the COCO dataset:
To train or test on MS COCO, you also need:
- pycocotools (installation instructions above)
- the MS COCO dataset (on Ubuntu, downloading with wget directly in the terminal is recommended)
wget http://images.cocodataset.org/zips/train2014.zip  # COCO train2014 training images
wget http://images.cocodataset.org/zips/val2014.zip    # COCO val2014 validation images
wget http://images.cocodataset.org/zips/test2014.zip   # COCO test2014 test images
- the 5K minival and 35K validation-minus-minival subsets (see the Faster R-CNN implementation for more details):
wget https://dl.dropboxusercontent.com/s/o43o90bna78omob/instances_minival2014.json.zip?dl=0  # minival annotations
wget https://dl.dropboxusercontent.com/s/s3tw5zcg7395368/instances_valminusminival2014.json.zip?dl=0  # validation-minus-minival annotations
If you use Docker, the code has been verified to work in this Docker container (if you cannot download it, leave your email address in the comments).
III. Getting started (requires Jupyter Notebook; see my post on installing Jupyter Notebook and setting up remote access)
demo.ipynb is the easiest place to start. It shows an example of using a model pre-trained on MS COCO to segment objects in your own images, and includes code to run object detection and instance segmentation on arbitrary images.
train_shapes.ipynb shows how to train Mask R-CNN on your own dataset. This notebook introduces a toy dataset (Shapes) to demonstrate training on a new dataset.
inspect_data.ipynb visualizes the different preprocessing steps used to prepare the training data.
inspect_model.ipynb goes step by step through the detection and segmentation pipeline, with visualizations of every stage.
inspect_weights.ipynb inspects the weights of a trained model and looks for anomalies and odd patterns.
1. Open samples/demo.ipynb (in Jupyter Notebook)
demo.ipynb
import os
import sys
import random
import math
import numpy as np
import skimage.io
import matplotlib
import matplotlib.pyplot as plt

# Root directory of the project
ROOT_DIR = os.path.abspath("../Desktop/Mask_RCNN-master")  # the project folder

# Import Mask RCNN
sys.path.append(ROOT_DIR)  # To find local version of the library
from mrcnn import utils
import mrcnn.model as modellib
from mrcnn import visualize
# Import COCO config
sys.path.append(os.path.join(ROOT_DIR, "samples/coco/"))  # put the downloaded 5K minival and 35K validation-minus-minival subsets into the coco folder
import coco

%matplotlib inline

# Directory to save logs and trained model
MODEL_DIR = os.path.join(ROOT_DIR, "logs")

# Local path to trained weights file
COCO_MODEL_PATH = os.path.join(ROOT_DIR, "mask_rcnn_coco.h5")  # put the downloaded trained weights in the project folder
# Download COCO trained weights from Releases if needed
if not os.path.exists(COCO_MODEL_PATH):
    utils.download_trained_weights(COCO_MODEL_PATH)

# Directory of images to run detection on
IMAGE_DIR = os.path.join(ROOT_DIR, "images")  # test images
Configurations
class InferenceConfig(coco.CocoConfig):
    # Set batch size to 1 since we'll be running inference on
    # one image at a time. Batch size = GPU_COUNT * IMAGES_PER_GPU
    GPU_COUNT = 1
    IMAGES_PER_GPU = 1

config = InferenceConfig()
config.display()

Configurations:
BACKBONE_SHAPES                [[256 256] [128 128] [64 64] [32 32] [16 16]]
BACKBONE_STRIDES               [4, 8, 16, 32, 64]
BATCH_SIZE                     1
BBOX_STD_DEV                   [0.1 0.1 0.2 0.2]
DETECTION_MAX_INSTANCES        100
DETECTION_MIN_CONFIDENCE       0.5
DETECTION_NMS_THRESHOLD        0.3
GPU_COUNT                      1
IMAGES_PER_GPU                 1
IMAGE_MAX_DIM                  1024
IMAGE_MIN_DIM                  800
IMAGE_PADDING                  True
IMAGE_SHAPE                    [1024 1024 3]
LEARNING_MOMENTUM              0.9
LEARNING_RATE                  0.002
MASK_POOL_SIZE                 14
MASK_SHAPE                     [28, 28]
MAX_GT_INSTANCES               100
MEAN_PIXEL                     [123.7 116.8 103.9]
MINI_MASK_SHAPE                (56, 56)
NAME                           coco
NUM_CLASSES                    81
POOL_SIZE                      7
POST_NMS_ROIS_INFERENCE        1000
POST_NMS_ROIS_TRAINING         2000
ROI_POSITIVE_RATIO             0.33
RPN_ANCHOR_RATIOS              [0.5, 1, 2]
RPN_ANCHOR_SCALES              (32, 64, 128, 256, 512)
RPN_ANCHOR_STRIDE              2
RPN_BBOX_STD_DEV               [0.1 0.1 0.2 0.2]
RPN_TRAIN_ANCHORS_PER_IMAGE    256
STEPS_PER_EPOCH                1000
TRAIN_ROIS_PER_IMAGE           128
USE_MINI_MASK                  True
USE_RPN_ROIS                   True
VALIDATION_STEPS               50
WEIGHT_DECAY                   0.0001
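As a sanity check on the numbers above, the total anchor count is fully determined by the backbone feature-map shapes, RPN_ANCHOR_STRIDE, and the number of anchor ratios. A back-of-the-envelope sketch (not library code; the shapes are copied from BACKBONE_SHAPES in the configuration), whose result matches the anchor shapes printed later by inspect_model.ipynb:

```python
# Back-of-the-envelope anchor count for the configuration above: anchors sit
# every RPN_ANCHOR_STRIDE (2) cells of each backbone feature map, one per
# aspect ratio (len(RPN_ANCHOR_RATIOS) == 3).
backbone_shapes = [(256, 256), (128, 128), (64, 64), (32, 32), (16, 16)]
anchor_stride = 2   # RPN_ANCHOR_STRIDE
num_ratios = 3      # RPN_ANCHOR_RATIOS [0.5, 1, 2]

total_anchors = sum((h // anchor_stride) * (w // anchor_stride) * num_ratios
                    for h, w in backbone_shapes)
print(total_anchors)  # 65472
```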
Create the model and load the trained weights
# Create model object in inference mode.
model = modellib.MaskRCNN(mode="inference", model_dir=MODEL_DIR, config=config)

# Load weights trained on MS-COCO
model.load_weights(COCO_MODEL_PATH, by_name=True)
Class names
# COCO Class names
# Index of the class in the list is its ID. For example, to get ID of
# the teddy bear class, use: class_names.index('teddy bear')
class_names = ['BG', 'person', 'bicycle', 'car', 'motorcycle', 'airplane',
               'bus', 'train', 'truck', 'boat', 'traffic light',
               'fire hydrant', 'stop sign', 'parking meter', 'bench', 'bird',
               'cat', 'dog', 'horse', 'sheep', 'cow', 'elephant', 'bear',
               'zebra', 'giraffe', 'backpack', 'umbrella', 'handbag', 'tie',
               'suitcase', 'frisbee', 'skis', 'snowboard', 'sports ball',
               'kite', 'baseball bat', 'baseball glove', 'skateboard',
               'surfboard', 'tennis racket', 'bottle', 'wine glass', 'cup',
               'fork', 'knife', 'spoon', 'bowl', 'banana', 'apple',
               'sandwich', 'orange', 'broccoli', 'carrot', 'hot dog', 'pizza',
               'donut', 'cake', 'chair', 'couch', 'potted plant', 'bed',
               'dining table', 'toilet', 'tv', 'laptop', 'mouse', 'remote',
               'keyboard', 'cell phone', 'microwave', 'oven', 'toaster',
               'sink', 'refrigerator', 'book', 'clock', 'vase', 'scissors',
               'teddy bear', 'hair drier', 'toothbrush']
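Since the list index is the class ID, mapping the IDs returned by model.detect back to names is a plain list lookup. A minimal sketch with a shortened, hypothetical list (the full class_names above works identically):

```python
# Mapping detected class IDs back to names is a plain list lookup; the index
# is the ID. Shortened, hypothetical class list for illustration only.
class_names = ['BG', 'person', 'bicycle', 'car']
class_ids = [1, 3, 3]  # e.g. r['class_ids'] returned by model.detect
names = [class_names[i] for i in class_ids]
print(names)  # ['person', 'car', 'car']
```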
Test on a randomly chosen image
# Load a random image from the images folder
file_names = next(os.walk(IMAGE_DIR))[2]
image = skimage.io.imread(os.path.join(IMAGE_DIR, random.choice(file_names)))

# Run detection
results = model.detect([image], verbose=1)

# Visualize results
r = results[0]
visualize.display_instances(image, r['rois'], r['masks'], r['class_ids'],
                            class_names, r['scores'])

Processing 1 images
image            shape: (476, 640, 3)       min:    0.00000  max:  255.00000
molded_images    shape: (1, 1024, 1024, 3)  min: -123.70000  max:  120.30000
image_metas      shape: (1, 89)             min:    0.00000  max: 1024.00000
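In the log above, molded_images dips to -123.7 because the model subtracts the per-channel means (MEAN_PIXEL in the configuration) before inference. A small sketch of just that mean-subtraction step (the library does this internally along with resizing):

```python
import numpy as np

# MEAN_PIXEL from the configuration is subtracted channel-wise before
# inference, which is why molded_images can reach -123.7
# (the darkest pixel value 0 minus the red-channel mean).
MEAN_PIXEL = np.array([123.7, 116.8, 103.9])
image = np.zeros((4, 4, 3))  # all-black test image
molded = image - MEAN_PIXEL
print(molded.min())  # -123.7
```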
The demo.ipynb above is a simple application of the Mask R-CNN project; once you have it running successfully, you have the basics of the project down.
2. inspect_model.ipynb
import os
import sys
import random
import math
import re
import time
import numpy as np
import tensorflow as tf
import matplotlib
import matplotlib.pyplot as plt
import matplotlib.patches as patches

# Root directory of the project
ROOT_DIR = os.path.abspath("../../")  # the project folder

# Import Mask RCNN
sys.path.append(ROOT_DIR)  # To find local version of the library
from mrcnn import utils
from mrcnn import visualize
from mrcnn.visualize import display_images
import mrcnn.model as modellib
from mrcnn.model import log

%matplotlib inline

# Directory to save logs and trained model
MODEL_DIR = os.path.join(ROOT_DIR, "logs")

# Local path to trained weights file
COCO_MODEL_PATH = os.path.join(ROOT_DIR, "mask_rcnn_coco.h5")
# Download COCO trained weights from Releases if needed
if not os.path.exists(COCO_MODEL_PATH):
    utils.download_trained_weights(COCO_MODEL_PATH)

# Path to Shapes trained weights
#SHAPES_MODEL_PATH = os.path.join(ROOT_DIR, "mask_rcnn_shapes.h5")
Configurations
# MS COCO Dataset
import coco
config = coco.CocoConfig()
COCO_DIR = "path to COCO dataset"  # TODO: enter value here
# Override the training configurations with a few
# changes for inferencing.
class InferenceConfig(config.__class__):
    # Run detection on one image at a time
    GPU_COUNT = 1
    IMAGES_PER_GPU = 1

config = InferenceConfig()
config.display()
Notebook Preferences
# Device to load the neural network on.
# Useful if you're training a model on the same
# machine, in which case use CPU and leave the
# GPU for training.
DEVICE = "/cpu:0"  # /cpu:0 or /gpu:0

# Inspect the model in training or inference modes
# values: 'inference' or 'training'
# TODO: code for 'training' test mode not ready yet
TEST_MODE = "inference"

def get_ax(rows=1, cols=1, size=16):
    """Return a Matplotlib Axes array to be used in
    all visualizations in the notebook. Provide a
    central point to control graph sizes.

    Adjust the size attribute to control how big to render images
    """
    _, ax = plt.subplots(rows, cols, figsize=(size*cols, size*rows))
    return ax
Load Validation Dataset
# Build validation dataset
if config.NAME == 'shapes':
    dataset = shapes.ShapesDataset()
    dataset.load_shapes(500, config.IMAGE_SHAPE[0], config.IMAGE_SHAPE[1])
elif config.NAME == "coco":
    dataset = coco.CocoDataset()
    dataset.load_coco(COCO_DIR, "minival")  # unzip the downloaded train2014/val2014/test2014 images into the coco folder

# Must call before using the dataset
dataset.prepare()

print("Images: {}\nClasses: {}".format(len(dataset.image_ids), dataset.class_names))
loading annotations into memory...
Done (t=4.86s)
creating index...
index created!
Images: 35185
Classes: ['BG', 'person', 'bicycle', 'car', 'motorcycle', 'airplane', 'bus', 'train', 'truck', 'boat', 'traffic light', 'fire hydrant', 'stop sign', 'parking meter', 'bench', 'bird', 'cat', 'dog', 'horse', 'sheep', 'cow', 'elephant', 'bear', 'zebra', 'giraffe', 'backpack', 'umbrella', 'handbag', 'tie', 'suitcase', 'frisbee', 'skis', 'snowboard', 'sports ball', 'kite', 'baseball bat', 'baseball glove', 'skateboard', 'surfboard', 'tennis racket', 'bottle', 'wine glass', 'cup', 'fork', 'knife', 'spoon', 'bowl', 'banana', 'apple', 'sandwich', 'orange', 'broccoli', 'carrot', 'hot dog', 'pizza', 'donut', 'cake', 'chair', 'couch', 'potted plant', 'bed', 'dining table', 'toilet', 'tv', 'laptop', 'mouse', 'remote', 'keyboard', 'cell phone', 'microwave', 'oven', 'toaster', 'sink', 'refrigerator', 'book', 'clock', 'vase', 'scissors', 'teddy bear', 'hair drier', 'toothbrush']
If you get an error that ../coco/annotations is missing instances_valminusminival2014.json and instances_minival2014.json, run the following command in a terminal:
wget http://images.cocodataset.org/annotations/annotations_trainval2014.zip  # unzip into the corresponding folder
Load Model
# Create model in inference mode
with tf.device(DEVICE):
    model = modellib.MaskRCNN(mode="inference", model_dir=MODEL_DIR, config=config)

# Set weights file path
if config.NAME == "shapes":
    weights_path = SHAPES_MODEL_PATH
elif config.NAME == "coco":
    weights_path = COCO_MODEL_PATH
# Or, uncomment to load the last model you trained
# weights_path = model.find_last()[1]

# Load weights
print("Loading weights ", weights_path)
model.load_weights(weights_path, by_name=True)
Run Detection
image_id = random.choice(dataset.image_ids)
image, image_meta, gt_class_id, gt_bbox, gt_mask =\
    modellib.load_image_gt(dataset, config, image_id, use_mini_mask=False)
info = dataset.image_info[image_id]
print("image ID: {}.{} ({}) {}".format(info["source"], info["id"], image_id,
                                       dataset.image_reference(image_id)))
# Run object detection
results = model.detect([image], verbose=1)

# Display results
ax = get_ax(1)
r = results[0]
visualize.display_instances(image, r['rois'], r['masks'], r['class_ids'],
                            dataset.class_names, r['scores'], ax=ax,
                            title="Predictions")
log("gt_class_id", gt_class_id)
log("gt_bbox", gt_bbox)
log("gt_mask", gt_mask)
Precision-Recall
# Draw precision-recall curve
AP, precisions, recalls, overlaps = utils.compute_ap(
    gt_bbox, gt_class_id, gt_mask,
    r['rois'], r['class_ids'], r['scores'], r['masks'])
visualize.plot_precision_recall(AP, precisions, recalls)
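utils.compute_ap returns the precision and recall arrays along with the AP, which is the area under the precision-recall curve. The sketch below is a hypothetical re-implementation of that final step (VOC-style, with the precision envelope made monotonically decreasing), not the library's exact code:

```python
import numpy as np

def voc_ap_sketch(precisions, recalls):
    """Area under the precision-recall curve with a monotonic precision
    envelope (a hypothetical sketch of compute_ap's final step)."""
    p = np.concatenate([[0.0], precisions, [0.0]])
    r = np.concatenate([[0.0], recalls, [1.0]])
    # Walk right-to-left so precision never increases as recall grows.
    for i in range(len(p) - 2, -1, -1):
        p[i] = max(p[i], p[i + 1])
    # Sum rectangle areas wherever recall changes.
    idx = np.where(r[1:] != r[:-1])[0] + 1
    return float(np.sum((r[idx] - r[idx - 1]) * p[idx]))

print(voc_ap_sketch([1.0, 0.5], [0.5, 1.0]))  # 0.75
```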
# Grid of ground truth objects and their predictions
visualize.plot_overlaps(gt_class_id, r['class_ids'], r['scores'],
                        overlaps, dataset.class_names)
Compute mAP @ IoU=50 on Batch of Images
# Compute VOC-style Average Precision
def compute_batch_ap(image_ids):
    APs = []
    for image_id in image_ids:
        # Load image
        image, image_meta, gt_class_id, gt_bbox, gt_mask =\
            modellib.load_image_gt(dataset, config,
                                   image_id, use_mini_mask=False)
        # Run object detection
        results = model.detect([image], verbose=0)
        # Compute AP
        r = results[0]
        AP, precisions, recalls, overlaps =\
            utils.compute_ap(gt_bbox, gt_class_id, gt_mask,
                             r['rois'], r['class_ids'], r['scores'], r['masks'])
        APs.append(AP)
    return APs

# Pick a set of random images
image_ids = np.random.choice(dataset.image_ids, 10)
APs = compute_batch_ap(image_ids)
print("mAP @ IoU=50: ", np.mean(APs))
mAP @ IoU=50: 0.656323084916
Step by Step Prediction
1.a RPN Targets
# Generate RPN training targets
# target_rpn_match is 1 for positive anchors, -1 for negative anchors
# and 0 for neutral anchors.
target_rpn_match, target_rpn_bbox = modellib.build_rpn_targets(
    image.shape, model.anchors, gt_class_id, gt_bbox, model.config)
log("target_rpn_match", target_rpn_match)
log("target_rpn_bbox", target_rpn_bbox)

positive_anchor_ix = np.where(target_rpn_match[:] == 1)[0]
negative_anchor_ix = np.where(target_rpn_match[:] == -1)[0]
neutral_anchor_ix = np.where(target_rpn_match[:] == 0)[0]
positive_anchors = model.anchors[positive_anchor_ix]
negative_anchors = model.anchors[negative_anchor_ix]
neutral_anchors = model.anchors[neutral_anchor_ix]
log("positive_anchors", positive_anchors)
log("negative_anchors", negative_anchors)
log("neutral anchors", neutral_anchors)

# Apply refinement deltas to positive anchors
refined_anchors = utils.apply_box_deltas(
    positive_anchors,
    target_rpn_bbox[:positive_anchors.shape[0]] * model.config.RPN_BBOX_STD_DEV)
log("refined_anchors", refined_anchors)
target_rpn_match    shape: (65472,)     min:   -1.00000  max:    1.00000
target_rpn_bbox     shape: (256, 4)     min:   -5.19860  max:    2.59641
positive_anchors    shape: (14, 4)      min:    5.49033  max:  973.25483
negative_anchors    shape: (242, 4)     min:  -22.62742  max: 1038.62742
neutral anchors     shape: (65216, 4)   min: -362.03867  max: 1258.03867
refined_anchors     shape: (14, 4)      min:    0.00000  max: 1023.99994
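The refinement step uses utils.apply_box_deltas: dy and dx shift the anchor center in units of its height and width, while the last two deltas are log scale factors for height and width. Below is a minimal re-implementation sketch of that (dy, dx, log(dh), log(dw)) convention; treat it as illustrative, not as the library's code:

```python
import numpy as np

def apply_box_deltas_sketch(boxes, deltas):
    """Apply (dy, dx, log(dh), log(dw)) deltas to [y1, x1, y2, x2] boxes.
    A re-implementation sketch of the convention utils.apply_box_deltas uses."""
    heights = boxes[:, 2] - boxes[:, 0]
    widths = boxes[:, 3] - boxes[:, 1]
    center_y = boxes[:, 0] + 0.5 * heights
    center_x = boxes[:, 1] + 0.5 * widths
    # Shift the center in units of the box size, then rescale.
    center_y = center_y + deltas[:, 0] * heights
    center_x = center_x + deltas[:, 1] * widths
    heights = heights * np.exp(deltas[:, 2])
    widths = widths * np.exp(deltas[:, 3])
    y1 = center_y - 0.5 * heights
    x1 = center_x - 0.5 * widths
    return np.stack([y1, x1, y1 + heights, x1 + widths], axis=1)

anchor = np.array([[0.0, 0.0, 10.0, 10.0]])
# dy = 0.5 shifts the center down by half the box height:
print(apply_box_deltas_sketch(anchor, np.array([[0.5, 0.0, 0.0, 0.0]])))
```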
# Display positive anchors before refinement (dotted) and
# after refinement (solid).
visualize.draw_boxes(image, boxes=positive_anchors,
                     refined_boxes=refined_anchors, ax=get_ax())
1.b RPN Prediction
# Run RPN sub-graph
pillar = model.keras_model.get_layer("ROI").output  # node to start searching from

# TF 1.4 introduces a new version of NMS. Search for both names to support TF 1.3 and 1.4
nms_node = model.ancestor(pillar, "ROI/rpn_non_max_suppression:0")
if nms_node is None:
    nms_node = model.ancestor(pillar, "ROI/rpn_non_max_suppression/NonMaxSuppressionV2:0")

rpn = model.run_graph([image], [
    ("rpn_class", model.keras_model.get_layer("rpn_class").output),
    ("pre_nms_anchors", model.ancestor(pillar, "ROI/pre_nms_anchors:0")),
    ("refined_anchors", model.ancestor(pillar, "ROI/refined_anchors:0")),
    ("refined_anchors_clipped", model.ancestor(pillar, "ROI/refined_anchors_clipped:0")),
    ("post_nms_anchor_ix", nms_node),
    ("proposals", model.keras_model.get_layer("ROI").output),
])
rpn_class                shape: (1, 65472, 2)   min:     0.00000  max:    1.00000
pre_nms_anchors          shape: (1, 10000, 4)   min:  -362.03867  max: 1258.03870
refined_anchors          shape: (1, 10000, 4)   min: -1385.67920  max: 2212.44043
refined_anchors_clipped  shape: (1, 10000, 4)   min:     0.00000  max: 1024.00000
post_nms_anchor_ix       shape: (1000,)         min:     0.00000  max: 1477.00000
proposals                shape: (1, 1000, 4)    min:     0.00000  max:    1.00000
# Show top anchors by score (before refinement)
limit = 100
sorted_anchor_ids = np.argsort(rpn['rpn_class'][:,:,1].flatten())[::-1]
visualize.draw_boxes(image, boxes=model.anchors[sorted_anchor_ids[:limit]], ax=get_ax())
# Show top anchors with refinement. Then with clipping to image boundaries
limit = 50
ax = get_ax(1, 2)
visualize.draw_boxes(image, boxes=rpn["pre_nms_anchors"][0, :limit],
                     refined_boxes=rpn["refined_anchors"][0, :limit], ax=ax[0])
visualize.draw_boxes(image, refined_boxes=rpn["refined_anchors_clipped"][0, :limit], ax=ax[1])
# Show refined anchors after non-max suppression
limit = 50
ixs = rpn["post_nms_anchor_ix"][:limit]
visualize.draw_boxes(image, refined_boxes=rpn["refined_anchors_clipped"][0, ixs], ax=get_ax())
# Show final proposals
# These are the same as the previous step (refined anchors
# after NMS) but with coordinates normalized to [0, 1] range.
limit = 50
# Convert back to image coordinates for display
h, w = config.IMAGE_SHAPE[:2]
proposals = rpn['proposals'][0, :limit] * np.array([h, w, h, w])
visualize.draw_boxes(image, refined_boxes=proposals, ax=get_ax())
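The multiplication by [h, w, h, w] in the cell above is the whole story of denormalization: proposals leave the ROI layer in [0, 1] coordinates and are scaled back to pixels for display. A tiny standalone sketch of that scaling:

```python
import numpy as np

# Proposals leave the ROI layer as normalized [y1, x1, y2, x2] in [0, 1];
# multiplying by [h, w, h, w] (as in the cell above) maps them to pixels.
h, w = 1024, 1024  # config.IMAGE_SHAPE[:2]
proposals_norm = np.array([[0.25, 0.25, 0.75, 0.75]])
proposals_px = proposals_norm * np.array([h, w, h, w])
print(proposals_px)  # [[256. 256. 768. 768.]]
```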
Stage 2: Proposal Classification
2.a Proposal Classification
# Get input and output to classifier and mask heads.
mrcnn = model.run_graph([image], [
    ("proposals", model.keras_model.get_layer("ROI").output),
    ("probs", model.keras_model.get_layer("mrcnn_class").output),
    ("deltas", model.keras_model.get_layer("mrcnn_bbox").output),
    ("masks", model.keras_model.get_layer("mrcnn_mask").output),
    ("detections", model.keras_model.get_layer("mrcnn_detection").output),
])
# Get detection class IDs. Trim zero padding.
det_class_ids = mrcnn['detections'][0, :, 4].astype(np.int32)
det_count = np.where(det_class_ids == 0)[0][0]
det_class_ids = det_class_ids[:det_count]
detections = mrcnn['detections'][0, :det_count]

print("{} detections: {}".format(
    det_count, np.array(dataset.class_names)[det_class_ids]))

captions = ["{} {:.3f}".format(dataset.class_names[int(c)], s) if c > 0 else ""
            for c, s in zip(detections[:, 4], detections[:, 5])]
visualize.draw_boxes(
    image,
    refined_boxes=utils.denorm_boxes(detections[:, :4], image.shape[:2]),
    visibilities=[2] * len(detections),
    captions=captions, title="Detections",
    ax=get_ax())
# Proposals are in normalized coordinates. Scale them
# to image coordinates.
h, w = config.IMAGE_SHAPE[:2]
proposals = np.around(mrcnn["proposals"][0] * np.array([h, w, h, w])).astype(np.int32)

# Class ID, score, and mask per proposal
roi_class_ids = np.argmax(mrcnn["probs"][0], axis=1)
roi_scores = mrcnn["probs"][0, np.arange(roi_class_ids.shape[0]), roi_class_ids]
roi_class_names = np.array(dataset.class_names)[roi_class_ids]
roi_positive_ixs = np.where(roi_class_ids > 0)[0]

# How many ROIs vs empty rows?
print("{} Valid proposals out of {}".format(np.sum(np.any(proposals, axis=1)), proposals.shape[0]))
print("{} Positive ROIs".format(len(roi_positive_ixs)))

# Class counts
print(list(zip(*np.unique(roi_class_names, return_counts=True))))
1000 Valid proposals out of 1000
71 Positive ROIs
[('BG', 929), ('airplane', 23), ('car', 11), ('person', 37)]
# Show final detections
# Note: `keep` and `refined_proposals` come from the notebook's earlier
# bounding-box refinement and NMS cells, which are omitted here.
ixs = np.arange(len(keep))  # Display all
# ixs = np.random.randint(0, len(keep), 10)  # Display random sample
captions = ["{} {:.3f}".format(dataset.class_names[c], s) if c > 0 else ""
            for c, s in zip(roi_class_ids[keep][ixs], roi_scores[keep][ixs])]
visualize.draw_boxes(
    image, boxes=proposals[keep][ixs],
    refined_boxes=refined_proposals[keep][ixs],
    visibilities=np.where(roi_class_ids[keep][ixs] > 0, 1, 0),
    captions=captions, title="Detections after NMS",
    ax=get_ax())
Stage 3: Generating Masks
3.a Mask Targets
display_images(np.transpose(gt_mask, [2, 0, 1]), cmap="Blues")
# det_mask_specific and det_masks come from the notebook's earlier mask-head cells (omitted here)
display_images(det_mask_specific[:4] * 255, cmap="Blues", interpolation="none")
display_images(det_masks[:4] * 255, cmap="Blues", interpolation="none")
Visualize Activations
# Get activations of a few sample layers
activations = model.run_graph([image], [
    ("input_image", model.keras_model.get_layer("input_image").output),
    ("res4w_out", model.keras_model.get_layer("res4w_out").output),  # for resnet101
    ("rpn_bbox", model.keras_model.get_layer("rpn_bbox").output),
    ("roi", model.keras_model.get_layer("ROI").output),
])
# Input image (normalized)
_ = plt.imshow(modellib.unmold_image(activations["input_image"][0], config))
# Backbone feature map
display_images(np.transpose(activations["res4w_out"][0,:,:,:4], [2, 0, 1]))
# Histograms of RPN bounding box deltas
plt.figure(figsize=(12, 3))
plt.subplot(1, 4, 1)
plt.title("dy")
_ = plt.hist(activations["rpn_bbox"][0,:,0], 50)
plt.subplot(1, 4, 2)
plt.title("dx")
_ = plt.hist(activations["rpn_bbox"][0,:,1], 50)
plt.subplot(1, 4, 3)
plt.title("dw")
_ = plt.hist(activations["rpn_bbox"][0,:,2], 50)
plt.subplot(1, 4, 4)
plt.title("dh")
_ = plt.hist(activations["rpn_bbox"][0,:,3], 50)
# Distribution of y, x coordinates of generated proposals
plt.figure(figsize=(10, 5))
plt.subplot(1, 2, 1)
plt.title("y1, x1")
plt.scatter(activations["roi"][0,:,0], activations["roi"][0,:,1])
plt.subplot(1, 2, 2)
plt.title("y2, x2")
plt.scatter(activations["roi"][0,:,2], activations["roi"][0,:,3])
plt.show()