Python implementation of Gabor filter texture feature extraction for finger veins, with finger vein ROI cutting code

Reference blogs:

https://blog.csdn.net/xue_wenyuan/article/details/51533953

https://blog.csdn.net/jinshengtao/article/details/17797641

The Fourier transform is a powerful tool in signal processing: it transforms an image from the spatial domain to the frequency domain and extracts features that are hard to extract in the spatial domain. After a Fourier transform, however, the frequency features of different image positions are mixed together. The Gabor filter, by contrast, can extract spatially local frequency features, which makes it an effective texture detection tool.

In image processing, the Gabor function is a linear filter used for edge extraction. The frequency and orientation representations of the Gabor filter are similar to those of the human visual system, and research has found it well suited to texture representation and discrimination. In the spatial domain, a two-dimensional Gabor filter is a Gaussian kernel function modulated by a sinusoidal plane wave.

 

The expression of the Gabor kernel function:

Complex form:

g(x, y; λ, θ, ψ, σ, γ) = exp(−(x′² + γ²·y′²) / (2σ²)) · exp(i·(2π·x′/λ + ψ))

This can be split into a real part:

g_real = exp(−(x′² + γ²·y′²) / (2σ²)) · cos(2π·x′/λ + ψ)

and an imaginary part:

g_imag = exp(−(x′² + γ²·y′²) / (2σ²)) · sin(2π·x′/λ + ψ)

where:

x′ = x·cos θ + y·sin θ

and

y′ = −x·sin θ + y·cos θ
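As a sanity check on the formulas above, the real part can be evaluated directly in plain Python. The function name `gabor_real` is my own for illustration; `cv2.getGaborKernel` in the code further below evaluates essentially the same expression over a whole kernel grid.

```python
import math

def gabor_real(x, y, lambd, theta, psi, sigma, gamma):
    """Real part of the Gabor kernel, evaluated from the formula above."""
    x_r = x * math.cos(theta) + y * math.sin(theta)    # rotated coordinate x'
    y_r = -x * math.sin(theta) + y * math.cos(theta)   # rotated coordinate y'
    envelope = math.exp(-(x_r ** 2 + gamma ** 2 * y_r ** 2) / (2 * sigma ** 2))
    carrier = math.cos(2 * math.pi * x_r / lambd + psi)
    return envelope * carrier

# At the kernel centre the Gaussian envelope is 1, so the value is cos(psi):
print(gabor_real(0, 0, lambd=10, theta=0, psi=0, sigma=4, gamma=0.5))   # → 1.0
```

For θ = 0 the rotated coordinates reduce to x′ = x and y′ = y, so the kernel is symmetric about the x-axis, which is a quick way to verify the implementation.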

Parameter introduction:

Orientation (θ): this parameter specifies the orientation of the Gabor function's parallel stripes; it takes values from 0 to 360 degrees.

Figure 4: Gabor filters with θ of 0°, 45°, 90°.

 

Wavelength (λ): its value is specified in pixels, usually greater than or equal to 2, but not greater than one fifth of the input image size.

Figure 7: Gabor filters with λ of 3, 8.

 

Phase offset (φ): it can take values from −180 to 180 degrees. 0 and 180 degrees correspond to the centrally symmetric center-on and center-off functions, respectively, while −90 and 90 degrees correspond to anti-symmetric functions.

Figure 5: Gabor filters with φ of 17°, 180°.

 

Aspect ratio (γ): the spatial aspect ratio, which determines the ellipticity of the Gabor function's support (its shape). When γ = 1 the support is circular; when γ < 1 it elongates along the direction of the parallel stripes. A typical value is 0.5.

Figure 6: Gabor filters with γ of 14, 45, 110.

 

 

Bandwidth (b): the half-response spatial frequency bandwidth b of a Gabor filter is related to the ratio σ/λ, where σ is the standard deviation of the Gaussian factor of the Gabor function, as follows:

b = log2( (σ·π/λ + sqrt(ln 2 / 2)) / (σ·π/λ − sqrt(ln 2 / 2)) )

or, equivalently:

σ/λ = (1/π) · sqrt(ln 2 / 2) · (2^b + 1) / (2^b − 1)

The value of σ cannot be set directly; it varies only with the bandwidth b. The bandwidth must be a positive real number, usually 1, in which case the standard deviation and wavelength are related by σ ≈ 0.56 λ. The smaller the bandwidth, the larger the standard deviation, the larger the Gabor envelope, and the greater the number of visible parallel excitatory and inhibitory stripes.
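The relationship above is easy to check numerically. This is a small sketch (the helper name `sigma_over_lambda` is my own) that recovers the σ ≈ 0.56 λ figure for a one-octave bandwidth:

```python
import math

def sigma_over_lambda(b):
    """Ratio sigma/lambda implied by a half-response bandwidth of b octaves."""
    k = math.sqrt(math.log(2) / 2)
    return (1.0 / math.pi) * k * (2.0 ** b + 1.0) / (2.0 ** b - 1.0)

# For the usual bandwidth of one octave, sigma is about 0.56 * lambda:
print(round(sigma_over_lambda(1.0), 2))   # → 0.56
```

The ratio decreases as b grows, which matches the statement that a smaller bandwidth gives a larger standard deviation.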

 

Alright. Now on to the main subject: extracting texture features.

When extracting or enhancing texture features, we often operate only on an ROI (region of interest). The other areas of the image are of little use to us and also slow the program down, so we take only the ROI for texture extraction.

First look at the original picture of the finger vein:

  

 

The picture contains many regions; generally we only need the ROI in the middle, which holds most of the finger vein texture.

 

Code:

#!/usr/bin/python
# coding: utf-8
import numpy as np
import os
import cv2

def pathFile(path):
    return os.getcwd() + '/' + path

def brightestColumn(img):
    # img.shape is (rows, cols); the variable names below follow the original code
    w, h = img.shape
    r = range(h // 2, h - 1)
    c = range(0, w - 1)
    return img[c][:, r].sum(axis=0).argmax()

# Build the Gabor filter bank
def build_filters():
    """Return a list of Gabor kernels in several orientations."""
    filters = []
    ksize = 31                                          # Gabor kernel size
    for theta in np.arange(0, np.pi, np.pi / 4):        # orientations 0, 45, 90, 135 degrees; different angles give different filtered images
        params = {'ksize': (ksize, ksize), 'sigma': 3.3, 'theta': theta,
                  'lambd': 18.3, 'gamma': 4.5, 'psi': 0.89, 'ktype': cv2.CV_32F}
        # The larger gamma is, the smaller the kernel envelope; the stripe count stays the same.
        # The larger sigma is, the larger the stripes and the envelope.
        # psi near 0 degrees centers the kernel on a white stripe; near 180 degrees on a black stripe.
        # theta is the stripe rotation angle; lambd is the wavelength (larger wavelength, wider stripes).
        kern = cv2.getGaborKernel(**params)             # create a kernel
        kern /= 1.5 * kern.sum()
        filters.append((kern, params))
    return filters

# Filtering process
def process(img, filters):
    """Return img filtered by the filter list."""
    accum = np.zeros_like(img)                          # accumulator, same size as img
    for kern, params in filters:
        fimg = cv2.filter2D(img, cv2.CV_8UC3, kern)     # 2D filtering, with kern as the filter template
        np.maximum(accum, fimg, accum)                  # element-wise maximum of accum and fimg, stored in accum; makes the texture more pronounced
    return accum

# Get the top and bottom bounds of the region of interest, for cropping and displaying the image
def getRoiHCut2(img, p0):
    h, w = img.shape

    maxTop = np.argmax(img[0:h // 2, 0])                # scan fixed bands of the image to find the finger vein edges
    minTop = np.argmax(img[0:h // 2, w - 1])
    if maxTop < 65:
        maxBottom = np.argmax(img[(13 * h // 16):(40 * h // 48), 0]) + 3 * h // 4
        minBottom = np.argmax(img[(13 * h // 16):(40 * h // 48), w - 1]) + 3 * h // 4
    else:
        maxBottom = np.argmax(img[(3 * h // 4):h, 0]) + 3 * h // 4
        minBottom = np.argmax(img[(3 * h // 4):h, w - 1]) + 3 * h // 4
    maxTop = (2 * maxTop + minTop) // 3
    maxBottom = (maxBottom + 2 * minBottom) // 3

    return img[maxTop:maxBottom, :]

# Get the region of interest
def getRoi(img):
    height, width = img.shape
    heightDist = height // 4

    w = img.copy()
    w1 = w[heightDist:3 * heightDist, width // 4:]
    p0 = brightestColumn(w1) + heightDist + height // 2  # offset the finger-edge column back into full-image coordinates
    pCol = w[:, p0:p0 + 1]

    pColInv = pCol[::-1]

    clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8))  # build a contrast-limited adaptive histogram equalizer

    w1_2 = clahe.apply(w[:, (p0 // 20):(p0 + p0 // 2)])  # crop a band about 1.5 * p0 wide, then equalize; apply() returns the result
    w2 = getRoiHCut2(w1_2, p0)

    res = cv2.resize(w2, (270, 150), interpolation=cv2.INTER_CUBIC)

    return clahe.apply(res)

def logImg(img):
    return img.astype(float) / 255                       # scale image data to the 0-1 range

mDir = []
imgs = []
dbDir = os.getcwd() + "/db100/"
people = os.listdir(dbDir)
people.sort()

for person in people:
    personDir = dbDir + person + "/"
    hands = os.listdir(personDir)

    for hand in hands:
        handDir = personDir + hand + "/"
        mDir += [handDir]
        mg = os.listdir(handDir)
        mg.sort()
        imgs = imgs + [handDir + s.split(".")[0] for s in mg if not s.split(".")[0] == "Thumbs"]

p0Imgs = [i.replace('db', 'gab_roi_db') for i in imgs]   # p0Imgs holds the output path of each file
mDir = [i.replace('db', 'gab_roi_db') for i in mDir]     # mDir holds the directories to create for the preprocessed images

# Create any output directories that do not exist yet
for path in mDir:
    if not os.path.exists(path):
        os.makedirs(path)

filters = build_filters()
for index, imgPath in enumerate(imgs):
    img = cv2.imread(imgPath + ".bmp", 0)
    res0 = process(getRoi(img), filters)                 # get the ROI, equalize its histogram, Gabor-filter, then cut
    cv2.imwrite(p0Imgs[index] + ".png", res0)
    print(index)

cv2.waitKey(0)
cv2.destroyAllWindows()

OK, now look at the finger vein image after processing:

It looks quite good. After this preprocessing, you can extract the texture features and store them in files for pattern matching and finger vein recognition. If you are interested, look forward to the next blog post.

 
