Python-based OpenCV pointer dial detection system (with source code & technical documentation)

1. Background

Pointer-type mechanical dials have many advantages: they are easy to install and maintain, simple in structure, and immune to electromagnetic interference, so they are widely used in industrial and mining enterprises and in the energy and metrology sectors. With the growing number of instruments and the advance of precision-instrument technology, manual reading can no longer meet practical needs. With the continuous development of computer and image-processing technology, automatic meter-reading technology for pointer-type mechanical gauges has emerged. It improves the automation and real-time performance of dial recognition and is expected to replace the traditional manual reading of industrial instruments and see wide use.

2. Research status at home and abroad

Model of the identified object: HCDL821-YB lightning protection online monitoring device

Difficulties in identification:
1. The inner dial is recessed deeply, so heavy shadows fall on the inner dial face, which makes the inner ellipse harder to identify.
2. The outer casing of the meter is highly reflective; a point light source produces bright spots and glare.
3. The inner dial face is also highly reflective, so its apparent color changes with the viewing angle, which makes threshold selection difficult.
4. The outer contour of the meter has rounded (filleted) edges, which reduces the accuracy of the fitted elliptical outer contour.

Dial features: the dial face and pointer are colored, and different color ranges correspond to different ranges of scale values.
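
Because these color ranges carry the scale information, one plausible way to exploit them is HSV thresholding with cv.inRange(). The following is only a minimal sketch of that idea; the file name, variable names, and HSV bounds are illustrative assumptions, not values taken from this project.

import cv2 as cv
import numpy as np

img = cv.imread("dial.jpg")                    # hypothetical dial photo
hsv = cv.cvtColor(img, cv.COLOR_BGR2HSV)

# Example bounds for a green band; real bounds must be tuned to the meter
lower_green = np.array([40, 60, 60])
upper_green = np.array([80, 255, 255])
mask_green = cv.inRange(hsv, lower_green, upper_green)

# Pixels inside the mask belong to the green scale range of the dial
print(cv.countNonZero(mask_green), "pixels in the green range")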

3. Characteristics of the algorithm

1. It can recognize the dial under different lighting conditions: strong light, normal light, weak light, point light sources, parallel light sources, etc.
2. It can recognize dials shot from different angles: head-on, oblique left, oblique right, oblique front, oblique rear, etc.
3. It can recognize dials at different distances: close-up, medium-distance, and long-distance shots.
4. It can recognize scenes containing distracting colors: a red table, white wallpaper, a green bucket, a blue basin, etc.
5. It can recognize dial photos of different sizes and resolutions.
6. Recognition is efficient: the whole program runs in under 10 seconds, whereas a conventional ellipse detection program takes more than a minute.

4. Algorithm flow chart

[Figure: algorithm flow chart]

5. Algorithm process visualization

[Figure: step-by-step visualization of the algorithm]

6. Preliminary processing

preliminary_pretreatment()
This function searches for the approximate position of the dial. Its highlight is a parameter self-adjusting call to cv.HoughCircles(): the param2 value is adjusted automatically according to the result of the circle search. (param2 is the accumulator threshold for circle centers in the detection stage: the smaller it is, the more false circles may be detected; the larger it is, the fewer circles pass detection and the closer they are to perfect circles.)

circles, param, x, y, r = [], 50, 0, 0, 0
while True:
    # Start with a strict accumulator threshold and relax it until a circle is found
    circles = cv.HoughCircles(pre, cv.HOUGH_GRADIENT, 1, 20,
                              param1=100, param2=param,
                              minRadius=100, maxRadius=300)
    if circles is None:
        param = param - 5
        continue
    circles = np.uint16(np.around(circles))
    # Keep the largest detected circle that still fits inside the image
    for i in circles[0, :]:
        if i[2] > r and i[2] < width / 2:
            r = i[2]
            x = i[0]
            y = i[1]
    break

7. Preprocessing

pretreatment()
This function performs conventional image preprocessing: grayscale conversion, Gaussian filtering for noise reduction, convolution (box) blurring, Canny edge detection, and a morphological closing operation.
![Figure](https://img-blog.csdnimg.cn/4b57bf13a1e04ee982b9c18a93baf0a8.png#pic_center)

gray = cv.cvtColor(img, cv.COLOR_BGR2GRAY)            # grayscale conversion
img_thresh = cv.GaussianBlur(gray, (5, 5), 0)         # Gaussian noise reduction
kernel = np.ones((5, 5), np.float32) / 25
img_thresh = cv.filter2D(img_thresh, -1, kernel)      # 5x5 mean (box) blur
edges = cv.Canny(img_thresh, 50, 150, apertureSize=3) # Canny edge detection
Matrix = np.ones((2, 2), np.uint8)
img_edge = cv.morphologyEx(edges, cv.MORPH_CLOSE, Matrix)  # morphological closing

findEllipse()
This function uses cv.fitEllipse() to fit ellipses to the detected contours and then screens the fitted ellipses against several conditions. One important condition is the approximate dial position (circle centre and radius) obtained in the preliminary processing step.

    contours, hierarchy = cv.findContours(img, cv.RETR_EXTERNAL, cv.CHAIN_APPROX_NONE)
    X, Y, ma, MA, angle = 0, 0, 0, 0, 0
    height, width, channels = img_copy.shape

    for ind, cont in enumerate(contours):
        if len(cont) > 5:
            (X0, Y0), (MA0, ma0), angle0 = cv.fitEllipse(cont)
            # Keep the largest ellipse that fits inside the image and whose centre
            # lies close to the circle found in the preliminary step
            # (further screening conditions are omitted in this excerpt)
            if (ma0 < min(width, height) and MA0 < max(width, height)
                    and distance(X0, Y0, x, y) < 1 / 2 * r
                    and ma0 > ma and MA0 > MA):
                X, Y, MA, ma, angle = X0, Y0, MA0, ma0, angle0
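
The distance() helper used above is not shown in this excerpt. A minimal sketch, assuming it simply returns the Euclidean distance between two points given as four scalars (which matches how it is called), would be:

import math

def distance(x1, y1, x2, y2):
    # Euclidean distance between (x1, y1) and (x2, y2)
    return math.hypot(x1 - x2, y1 - y2)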

8. Perspective transformation corrects the shooting angle

findvertex()
This function intersects the fitted ellipse outline with two lines through its centre to locate the four vertices (the endpoints of the ellipse axes), masks out everything outside the ellipse, and returns the masked image together with the four points ordered as top, bottom, left, right.

points = []
# img1: the outline of the fitted ellipse
img1 = np.zeros((img_copy.shape[0], img_copy.shape[1]), dtype=np.uint8)
cv.ellipse(img1, (int(X), int(Y)), (int(MA / 2), int(ma / 2)), angle, 0, 360, (255, 255, 255), 2)
# img2: two lines drawn through the ellipse centre, roughly along its axes
img2 = np.zeros((img_copy.shape[0], img_copy.shape[1]), dtype=np.uint8)
cv.line(img2, (int(X - math.cos(angle) * ma), int(Y + math.sin(angle) * ma)),
        (int(X + math.cos(angle) * ma), int(Y - math.sin(angle) * ma)), (255, 255, 255), 1)
cv.line(img2, (int(X + math.sin(angle) * MA), int(Y + math.cos(angle) * MA)),
        (int(X - math.sin(angle) * MA), int(Y - math.cos(angle) * MA)), (255, 255, 255), 1)
# Intersections of the outline and the lines are candidate vertices
for i in range(img_copy.shape[0]):
    for j in range(img_copy.shape[1]):
        if img1[i, j] > 0 and img2[i, j] > 0:
            points.append((j, i))
# Keep only one point from each run of candidates with similar x coordinates
point = []
n = points[0][0]
for i in range(len(points)):
    if abs(points[i][0] - n) > 2:
        point.append(points[i])
        n = points[i][0]
point.append(points[0])
# img3: a filled ellipse mask; everything outside the dial is painted white
img3 = np.zeros((img_copy.shape[0], img_copy.shape[1]), dtype=np.uint8)
cv.ellipse(img3, (int(X), int(Y)), (int(MA / 2), int(ma / 2)), angle, 0, 360, (255, 255, 255), -1)
for i in range(img_copy.shape[0]):
    for j in range(img_copy.shape[1]):
        if img3[i, j] == 0:
            img_copy[i, j] = 255
# Order the four vertices as top, bottom, left, right
order = []
order.append(point[np.argmin(point, axis=0)[1]])
order.append(point[np.argmax(point, axis=0)[1]])
order.append(point[np.argmin(point, axis=0)[0]])
order.append(point[np.argmax(point, axis=0)[0]])
return img_copy, order

perspective_transformation()
A perspective transformation is a central projection mapping between two planes. This function uses it to correct the shooting angle: the four ellipse vertices are mapped to the midpoints of the sides of a w x w square, so the elliptical dial outline becomes approximately a circle, reducing the out-of-roundness error of the dial.

w = min(img_copy.shape[0], img_copy.shape[1])
# Source points: the four ellipse vertices (top, bottom, left, right)
pts1 = np.float32([[point[0][0], point[0][1]], [point[1][0], point[1][1]],
                   [point[2][0], point[2][1]], [point[3][0], point[3][1]]])
# Destination points: the midpoints of the sides of a w x w square
pts2 = np.float32([[w / 2, 0], [w / 2, w], [0, w / 2], [w, w / 2]])
M = cv.getPerspectiveTransform(pts1, pts2)
dst = cv.warpPerspective(img_copy, M, (w, w))

9. Affine transformation makes the dial level

alignment()
An affine transformation is a linear mapping from 2-D coordinates to 2-D coordinates that preserves "straightness" (straight lines remain straight after the transformation) and "parallelism" (the relative positions within a 2-D figure are preserved: parallel lines stay parallel, and the order of points on a line is unchanged). This function applies an affine rotation so that the dial is level.

# Reference point and the offset from the last key point, used for the rotation angle
x0, y0 = farpoint(point_k, point_k[-1])
xlen, ylen = x0 - point_k[-1][0], y0 - point_k[-1][1]
# rad: the rotation angle in radians derived from (xlen, ylen); its computation is omitted in this excerpt
deg = math.degrees(rad)
image_center = tuple(np.array(img_copy.shape)[:2] / 2)
rot_mat = cv.getRotationMatrix2D(image_center, deg, 1)
# Rotate both the working copy and the output image so the dial is level
dst_copy = cv.warpAffine(img_copy, rot_mat, img_copy.shape[:2], flags=cv.INTER_LINEAR)
output = cv.warpAffine(output, rot_mat, output.shape[:2], flags=cv.INTER_LINEAR)

10. Read the dial scale

farpoint() and nearpoint(): find all the dividing (scale) points on the dial.
points2ciecle(): find the centre coordinates and radius of the dial arc from any three points on it.
cal_ang(): compute the angle formed by three points.
The dial is divided into four partitions (0, 1, 2 and 3) according to its dividing points. Finally, from the two angles obtained earlier and the partition in which the pointer lies, the final reading of the pointer on the dial is computed, which completes the identification of the whole meter dial.
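
points2ciecle() and cal_ang() are not listed in this excerpt. The sketch below shows what they might look like, assuming the standard circumscribed-circle formula and the usual three-point angle computation; the function bodies and the sample coordinates are illustrative reconstructions, not the project's actual code.

import math

def points2ciecle(p1, p2, p3):
    # Centre and radius of the circle passing through three points
    (x1, y1), (x2, y2), (x3, y3) = p1, p2, p3
    d = 2 * (x1 * (y2 - y3) + x2 * (y3 - y1) + x3 * (y1 - y2))
    ux = ((x1 ** 2 + y1 ** 2) * (y2 - y3) + (x2 ** 2 + y2 ** 2) * (y3 - y1)
          + (x3 ** 2 + y3 ** 2) * (y1 - y2)) / d
    uy = ((x1 ** 2 + y1 ** 2) * (x3 - x2) + (x2 ** 2 + y2 ** 2) * (x1 - x3)
          + (x3 ** 2 + y3 ** 2) * (x2 - x1)) / d
    r = math.hypot(x1 - ux, y1 - uy)
    return (ux, uy), r

def cal_ang(p1, p2, p3):
    # Angle (in degrees) at vertex p2 formed by the segments p2-p1 and p2-p3
    a1 = math.atan2(p1[1] - p2[1], p1[0] - p2[0])
    a2 = math.atan2(p3[1] - p2[1], p3[0] - p2[0])
    ang = abs(math.degrees(a1 - a2))
    return ang if ang <= 180 else 360 - ang

# Hypothetical usage: three scale points lying on the dial arc
(cx, cy), radius = points2ciecle((10, 80), (50, 20), (90, 80))
swept = cal_ang((10, 80), (cx, cy), (90, 80))   # angle subtended at the arc centre
print((cx, cy), radius, swept)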

Project PPT download link

[PPT] Python-based OpenCV pointer dial detection system (PPT)

Complete source code download link (with installation tutorial & demo video & documentation):
