"OpenCV 3 Computer Vision: Python Language Implementation" Study Notes: Thoughts on Basic Motion Detection in Target Tracking

I am new to Python and OpenCV and am learning computer vision by typing in the code from this book. While doing so, I found that some of my results were never as good as the author's, even though the code was identical. After repeated experiments and comparisons, I understood why: the author's camera differs from mine, so the resolution of the captured images and videos differs, and some sizes and other parameters in the book's code cannot be used exactly as given; they need to be adjusted to the actual situation.
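For example, the minimum contour area used later in this chapter (1500 pixels) was presumably tuned for the author's camera resolution. A simple way to adapt such a size parameter is to scale it by the actual frame area; this is a sketch of my own (the 640x480 baseline is an assumption, not stated in the book):

```python
def scaled_min_area(frame_width, frame_height, base_area=1500, base_size=(640, 480)):
    # Scale a minimum-contour-area threshold tuned for base_size (assumed
    # here to be 640x480) to the camera's actual frame size, so the same
    # code rejects comparably sized noise blobs at any resolution.
    scale = (frame_width * frame_height) / (base_size[0] * base_size[1])
    return int(base_area * scale)
```

At 1280x960 (four times the pixel count) this yields a threshold of 6000 instead of 1500.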

This article records the problems I found and solved while implementing "Basic Motion Detection" in Chapter 8 [Target Tracking] of "OpenCV 3 Computer Vision: Python Language Implementation".

The source code from the book is as follows:

import cv2
import numpy as np

camera = cv2.VideoCapture(0)

# Elliptical kernel used to dilate the thresholded difference image
es = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (10, 10))
kernel = np.ones((5, 5), np.uint8)  # defined in the book but unused here
background = None

while True:
  ret, frame = camera.read()
  if not ret:
    break
  # Use the first captured frame as the static background
  if background is None:
    background = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    background = cv2.GaussianBlur(background, (21, 21), 0)
    continue

  gray_frame = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
  gray_frame = cv2.GaussianBlur(gray_frame, (21, 21), 0)
  # Pixels differing from the background by more than 25 become white
  diff = cv2.absdiff(background, gray_frame)
  diff = cv2.threshold(diff, 25, 255, cv2.THRESH_BINARY)[1]
  diff = cv2.dilate(diff, es, iterations=2)
  # OpenCV 3 returns three values here; OpenCV 4 returns only two
  image, cnts, hierarchy = cv2.findContours(diff.copy(), cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)

  for c in cnts:
    # Ignore small contours; the 1500 threshold depends on frame resolution
    if cv2.contourArea(c) < 1500:
      continue
    (x, y, w, h) = cv2.boundingRect(c)
    cv2.rectangle(frame, (x, y), (x + w, y + h), (255, 255, 0), 2)

  cv2.imshow("contours", frame)
  cv2.imshow("dif", diff)
  #if cv2.waitKey(1000 / 12) & 0xff == ord("q"):  # fails in Python 3: waitKey needs an int
  if cv2.waitKey(90) & 0xff == ord("q"):
      break

cv2.destroyAllWindows()
camera.release()

I copied the code exactly as in the book, ran the program, and found that the entire frame was detected as a moving target, so actual human movement could not be distinguished. My guess was that the problem starts at camera = cv2.VideoCapture(0):

the first frame is read and assigned to the background immediately after the camera is opened, when the sensor has only just turned on and not enough light is reaching it yet, so the background frame is under-exposed. Every later frame is then compared against this too-dark background, and because the overall brightness difference is so large, the whole frame always exceeds the threshold and is detected as different from the background.
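This explanation can be checked with a few lines of NumPy that mimic what cv2.absdiff followed by cv2.threshold compute (the pixel values 40 and 90 are made up for illustration, not measured):

```python
import numpy as np

# An under-exposed first frame (used as the background) versus a frame
# captured after auto-exposure has settled; values are illustrative.
background = np.full((4, 4), 40, dtype=np.uint8)
frame = np.full((4, 4), 90, dtype=np.uint8)

# Equivalent of cv2.absdiff, then cv2.threshold(diff, 25, 255, THRESH_BINARY)
diff = np.abs(frame.astype(np.int16) - background.astype(np.int16)).astype(np.uint8)
mask = np.where(diff > 25, 255, 0).astype(np.uint8)

# Every pixel differs by 50 > 25, so the entire frame is flagged as motion
print(int(mask.min()))
```

Since every pixel of the mask is 255, every contour found afterwards spans the whole frame, which matches the symptom I saw.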

To solve this problem, add a wait, time.sleep(5) (remember to import time), after camera = cv2.VideoCapture(0) and before the first ret, frame = camera.read(). After running the program again, the problem was solved.
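A minimal sketch of that fix. The helper name and the option to discard warm-up frames are my own additions; the fix described above is only the time.sleep(5):

```python
import time

def warm_up(camera, settle_seconds=5, warmup_frames=0):
    # Let the sensor's auto-exposure settle before the first frame is
    # grabbed as the background. settle_seconds matches the sleep used in
    # the fix; discarding a few warm-up frames is an alternative approach.
    time.sleep(settle_seconds)
    for _ in range(warmup_frames):
        camera.read()  # read and throw away frames while exposure adapts
    return camera

# Usage with OpenCV (illustrative, requires a connected camera):
# camera = warm_up(cv2.VideoCapture(0), settle_seconds=5)
# ret, frame = camera.read()  # this frame becomes the background
```

Discarding frames instead of sleeping has the advantage of waiting exactly as long as the camera pipeline needs, but either approach ensures the background frame is properly exposed.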

