A Demo of Eye Tracking

Introduction

Eye-tracking data is a valuable source of information for studying cognition, and especially language comprehension, in humans. But how do we track the eye in the first place? This demo designs and implements a simple eye-tracking program using OpenCV and dlib, with a particular focus on real-time detection. To meet these demands, a pre-trained OpenCV Haar cascade eye detector is loaded, and dlib's 68-point facial landmark model is applied to locate the eye region in a real-time video stream. Haar cascades are especially effective on resource-constrained devices, or when more computationally expensive object detectors cannot be used. The code is as follows.

# import
import cv2
import numpy as np
import dlib


# Load the pre-trained Haar cascade eye detector
# (loaded for completeness; the crop below relies on dlib landmarks)
eye_cascade = cv2.CascadeClassifier('model/haarcascade_eye.xml')

# Load the facial landmark model
detector = dlib.get_frontal_face_detector()
predictor = dlib.shape_predictor('model/shape_predictor_68_face_landmarks.dat')

# Open the default camera
capture = cv2.VideoCapture(0)

# Initial values for the eye cropping region
minY, maxY, minX, maxX = 0, 0, 0, 0
eye_img = None  # most recent eye crop

# Grab frames in real time
while True:
    ret, frame = capture.read()
    if not ret:
        break
    # Convert to grayscale (OpenCV frames are BGR, not RGB)
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)

    # Detected face rectangles
    rects = detector(gray, 0)
    for i in range(len(rects)):
        landmarks = np.matrix([[p.x, p.y] for p in predictor(gray, rects[i]).parts()])
        for idx, point in enumerate(landmarks):

            # Coordinates of each of the 68 landmark points
            pos = (point[0, 0], point[0, 1])
            # print(idx, pos)

            # Eye landmarks (indices 36-47 in the 68-point model)
            if idx == 36:   # outer corner of the left eye (leftmost)
                minX = pos[0] - 10

            if idx == 45:   # outer corner of the right eye (rightmost)
                maxX = pos[0] + 10

            if idx == 38:   # upper lid of the left eye
                minY = pos[1] - 10

            if idx == 47:   # lower lid of the right eye
                maxY = pos[1] + 10

            # Draw a small circle for each of the 68 feature points
            cv2.circle(frame, pos, 2, color=(0, 255, 0))

        # Crop the eye region once all four bounds are set
        if minX > 0 and maxX > 0 and minY > 0 and maxY > 0:
            eye_img = frame[minY:maxY, minX:maxX]

    # Display the cropped eye region (fall back to the full frame)
    cv2.imshow("img", eye_img if eye_img is not None else frame)
    if cv2.waitKey(5) & 0xFF == ord('q'):
        break
# Release the camera
capture.release()
# Close all windows
cv2.destroyAllWindows()
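The per-index checks above can also be collapsed into a single min/max over all twelve eye landmarks (indices 36-47 in dlib's 68-point model), which makes the margin logic easier to test in isolation. Below is a minimal sketch; the helper name `eye_bbox` is my own, not part of the demo or of dlib:

```python
import numpy as np

def eye_bbox(landmarks, margin=10):
    """Bounding box (minX, minY, maxX, maxY) around both eyes.

    landmarks: (68, 2) array of (x, y) points from the dlib
    68-point model; indices 36-47 cover the two eyes.
    margin: pixels of padding added on every side.
    """
    eyes = np.asarray(landmarks)[36:48]
    min_x, min_y = eyes.min(axis=0) - margin
    max_x, max_y = eyes.max(axis=0) + margin
    return int(min_x), int(min_y), int(max_x), int(max_y)

# Synthetic landmarks: eye points cluster around (60, 65),
# with extremes at (40, 60) and (80, 70).
pts = np.full((68, 2), 100)
pts[36:48] = (60, 65)
pts[36] = (40, 60)   # leftmost / topmost eye point
pts[45] = (80, 70)   # rightmost / bottommost eye point
print(eye_bbox(pts))  # (30, 50, 90, 80)
```

The returned tuple can slice the frame directly, e.g. `frame[minY:maxY, minX:maxX]`, replacing the four `if idx == ...` branches in the loop.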
