This article explains in detail how to implement a face recognition system based on Dlib in Python. I find it quite practical, so I am sharing it here for reference; I hope you will get something out of it after reading.
The workflow of the face recognition system is the same as before, except that here we rely on two libraries, dlib and face_recognition. face_recognition is a wrapper around dlib that makes dlib much easier to use, so the first step is to install both libraries:
pip3 install dlib
pip3 install face_recognition
Then install the imutils library as well:
pip3 install imutils
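To make sure the installation worked, you can run a quick check like the one below. This is just a minimal verification sketch, not one of the project's scripts:

# quick check that the installation worked (a minimal sketch, not part of the project)
import dlib
import face_recognition  # importing will fail here if the wrapper did not install cleanly

print("dlib version:", dlib.__version__)
print("dlib compiled with CUDA support:", dlib.DLIB_USE_CUDA)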
Let's take a look at the project directory structure:
.
├── dataset
│   ├── alan_grant [22 entries exceeds filelimit, not opening dir]
│   ├── claire_dearing [53 entries exceeds filelimit, not opening dir]
│   ├── ellie_sattler [31 entries exceeds filelimit, not opening dir]
│   ├── ian_malcolm [41 entries exceeds filelimit, not opening dir]
│   ├── john_hammond [36 entries exceeds filelimit, not opening dir]
│   └── owen_grady [35 entries exceeds filelimit, not opening dir]
├── examples
│   ├── example_01.png
│   ├── example_02.png
│   └── example_03.png
├── output
│   ├── lunch_scene_output.avi
│   └── webcam_face_recognition_output.avi
├── videos
│   └── lunch_scene.mp4
├── encode_faces.py
├── encodings.pickle
├── recognize_faces_image.py
├── recognize_faces_video_file.py
├── recognize_faces_video.py
└── search_bing_api.py

10 directories, 12 files
First, extract the 128-dimensional face embeddings.
The command is as follows:
python3 encode_faces.py --dataset dataset --encodings encodings.pickle -d hog
Remember: if your machine does not have enough memory, use the hog model for face detection; if memory is sufficient, you can use the cnn (neural network) detector instead.
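To get a feel for the trade-off, you can time both detectors on a single image. The following is a rough sketch, not one of the project scripts; it assumes examples/example_01.png from the examples folder is available, and note that the cnn detector is much slower and more memory-hungry without a GPU:

# rough timing comparison of the two detectors (a sketch, not a project script)
import time
import face_recognition

# assumed sample image from the project's examples folder
image = face_recognition.load_image_file("examples/example_01.png")

for model in ("hog", "cnn"):
    start = time.time()
    boxes = face_recognition.face_locations(image, model=model)
    elapsed = time.time() - start
    print("{}: {} face(s) detected in {:.2f}s".format(model, len(boxes), elapsed))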
Here is the code:
# USAGE
# python encode_faces.py --dataset dataset --encodings encodings.pickle

# import the necessary packages
from imutils import paths
import face_recognition
import argparse
import pickle
import cv2
import os

# construct the argument parser and parse the arguments
ap = argparse.ArgumentParser()
ap.add_argument("-i", "--dataset", required=True,
    help="path to input directory of faces + images")
ap.add_argument("-e", "--encodings", required=True,
    help="path to serialized db of facial encodings")
ap.add_argument("-d", "--detection-method", type=str, default="hog",
    help="face detection model to use: either `hog` or `cnn`")
args = vars(ap.parse_args())

# grab the paths to the input images in our dataset
print("[INFO] quantifying faces...")
imagePaths = list(paths.list_images(args["dataset"]))

# initialize the list of known encodings and known names
knownEncodings = []
knownNames = []

# loop over the image paths
for (i, imagePath) in enumerate(imagePaths):
    # extract the person name from the image path
    print("[INFO] processing image {}/{}".format(i + 1, len(imagePaths)))
    name = imagePath.split(os.path.sep)[-2]

    # load the input image and convert it from BGR (OpenCV ordering)
    # to dlib ordering (RGB)
    image = cv2.imread(imagePath)
    rgb = cv2.cvtColor(image, cv2.COLOR_BGR2RGB)

    # detect the (x, y)-coordinates of the bounding boxes
    # corresponding to each face in the input image
    boxes = face_recognition.face_locations(rgb,
        model=args["detection_method"])

    # compute the facial embedding for the face
    encodings = face_recognition.face_encodings(rgb, boxes)

    # loop over the encodings
    for encoding in encodings:
        # add each encoding + name to our set of known names and
        # encodings
        knownEncodings.append(encoding)
        knownNames.append(name)

# dump the facial encodings + names to disk
print("[INFO] serializing encodings...")
data = {"encodings": knownEncodings, "names": knownNames}
f = open(args["encodings"], "wb")
f.write(pickle.dumps(data))
f.close()
The output is a 128-dimensional vector and the corresponding name for each face in each image, serialized to disk for later use in recognition.
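As a quick check, you can load encodings.pickle and see what was stored. This is just a small inspection sketch, assuming the file was generated by encode_faces.py above:

# inspect the serialized encodings (a small sketch, not one of the project scripts)
import pickle

data = pickle.loads(open("encodings.pickle", "rb").read())
print("number of encodings:", len(data["encodings"]))
print("number of names:", len(data["names"]))
# each encoding is a 128-dimensional NumPy vector
print("embedding shape:", data["encodings"][0].shape)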
Recognizing faces in an image:
Here the final face recognition is done with a k-NN-style voting approach rather than by training an SVM.
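Under the hood, compare_faces simply thresholds the Euclidean distance between 128-d embeddings (the library's default tolerance is 0.6). The short sketch below reproduces the same match-and-vote idea explicitly with face_recognition.face_distance; it is an illustration of the approach, not code from the project:

# sketch of the matching-and-voting step, written with explicit distances
import numpy as np
import face_recognition

def vote_name(known_encodings, known_names, encoding, tolerance=0.6):
    # Euclidean distance between the query embedding and every known embedding
    distances = face_recognition.face_distance(known_encodings, encoding)
    # every known embedding closer than the tolerance counts as a match
    matched = [n for n, d in zip(known_names, distances) if d <= tolerance]
    if not matched:
        return "Unknown"
    # majority vote among the matched names
    names, counts = np.unique(matched, return_counts=True)
    return str(names[np.argmax(counts)])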
The command is as follows:
python3 recognize_faces_image.py --encodings encodings.pickle --image examples/example_01.png
Here is the code:
# USAGE
# python recognize_faces_image.py --encodings encodings.pickle --image examples/example_01.png

# import the necessary packages
import face_recognition
import argparse
import pickle
import cv2

# construct the argument parser and parse the arguments
ap = argparse.ArgumentParser()
ap.add_argument("-e", "--encodings", required=True,
    help="path to serialized db of facial encodings")
ap.add_argument("-i", "--image", required=True,
    help="path to input image")
ap.add_argument("-d", "--detection-method", type=str, default="cnn",
    help="face detection model to use: either `hog` or `cnn`")
args = vars(ap.parse_args())

# load the known faces and embeddings
print("[INFO] loading encodings...")
data = pickle.loads(open(args["encodings"], "rb").read())

# load the input image and convert it from BGR to RGB
image = cv2.imread(args["image"])
rgb = cv2.cvtColor(image, cv2.COLOR_BGR2RGB)

# detect the (x, y)-coordinates of the bounding boxes corresponding
# to each face in the input image, then compute the facial embeddings
# for each face
print("[INFO] recognizing faces...")
boxes = face_recognition.face_locations(rgb,
    model=args["detection_method"])
encodings = face_recognition.face_encodings(rgb, boxes)

# initialize the list of names for each face detected
names = []

# loop over the facial embeddings
for encoding in encodings:
    # attempt to match each face in the input image to our known
    # encodings
    matches = face_recognition.compare_faces(data["encodings"],
        encoding)
    name = "Unknown"

    # check to see if we have found a match
    if True in matches:
        # find the indexes of all matched faces then initialize a
        # dictionary to count the total number of times each face
        # was matched
        matchedIdxs = [i for (i, b) in enumerate(matches) if b]
        counts = {}

        # loop over the matched indexes and maintain a count for
        # each recognized face
        for i in matchedIdxs:
            name = data["names"][i]
            counts[name] = counts.get(name, 0) + 1

        # determine the recognized face with the largest number of
        # votes (note: in the event of an unlikely tie Python will
        # select first entry in the dictionary)
        name = max(counts, key=counts.get)

    # update the list of names
    names.append(name)

# loop over the recognized faces
for ((top, right, bottom, left), name) in zip(boxes, names):
    # draw the predicted face name on the image
    cv2.rectangle(image, (left, top), (right, bottom), (0, 255, 0), 2)
    y = top - 15 if top - 15 > 15 else top + 15
    cv2.putText(image, name, (left, y), cv2.FONT_HERSHEY_SIMPLEX,
        0.75, (0, 255, 0), 2)

# show the output image
cv2.imshow("Image", image)
cv2.waitKey(0)
The actual result looks like this:
That concludes this article on how to implement a Dlib-based face recognition system in Python. I hope the content above is helpful and that you have learned something from it; if you found the article useful, please share it so more people can see it.