Face Recognition: Commonly Used Face Databases

Face Databases From Other Research Groups

 

We list face databases widely used in face-related studies and summarize their specifications below.

 

1. Caltech Occluded Faces in the Wild (COFW).

o       Source: The COFW face dataset is built by the California Institute of Technology.

o       Purpose: COFW face dataset contains images with severe facial occlusion. The images are collected from the internet.  

o       Properties:

# of subjects: -
# of images/videos: 1,345 images in the training set and 507 images in the testing set
Static/Videos: Static images
Single/Multiple faces: Single
Gray/Color: Color
Resolution: -
Face pose: Various poses
Facial expression: Various expressions
Illumination: Various illuminations
3D data: -
Ground truth: 29 facial landmark annotations and per-landmark occlusion labels

o       Reference: refer to the paper: X. P. Burgos-Artizzu, P. Perona, and P. Dollár, "Robust face landmark estimation under occlusion", ICCV 2013, Sydney, Australia, December 2013. (An occlusion-aware evaluation sketch follows below.)

 

2. Ibug 300 Faces In-the-Wild (ibug 300W) Challenge database.  

o       Source: The ibug 300W face dataset is built by the Intelligent Behavior Understanding Group (ibug) at Imperial College London.

o       Purpose: The ibug 300W face dataset contains ''in-the-wild'' images collected from the internet.  

o       Properties:

# of subjects: -
# of images/videos: 4,000+ images
Static/Videos: Static images
Single/Multiple faces: Single
Gray/Color: Color
Resolution: -
Face pose: Various poses
Facial expression: Various expressions
Illumination: Various illuminations
3D data: -
Ground truth: 68 facial landmark annotations

o       Reference: refer to the paper: C. Sagonas, E. Antonakos, G. Tzimiropoulos, S. Zafeiriou, and M. Pantic, "300 Faces In-the-Wild Challenge: Database and results", Image and Vision Computing (IMAVIS), 2016. (A sketch for reading the accompanying landmark files follows below.)

 

3. Ibug 300 Videos in the Wild (ibug 300-VW) Challenge dataset.  

o       Source: The ibug 300-VW face dataset is built by the Intelligent Behavior Understanding Group (ibug) at Imperial College London.

o       Purpose: The ibug 300VW face dataset contains ''in-the-wild'' videos collected from the internet.  

o       Properties:

# of subjects: -
# of images/videos: 114 videos
Static/Videos: Videos
Single/Multiple faces: Single
Gray/Color: Color
Resolution: -
Face pose: Various poses
Facial expression: Various expressions
Illumination: Various illuminations
3D data: -
Ground truth: 68 facial landmark annotations

o       Reference: refer to the paper: J. Shen, S. Zafeiriou, G. S. Chrysos, J. Kossaifi, G. Tzimiropoulos, and M. Pantic, "The first facial landmark tracking in-the-wild challenge: Benchmark and results", IEEE International Conference on Computer Vision Workshops (ICCVW), 2015.

 

4. 3D Face Alignment in the Wild (3DFAW) Challenge dataset.  

o       Source: The 3DFAW dataset is built by the organizers of the 3DFAW challenge.

o       Purpose: The 3DFAW face dataset contains real and synthetic facial images with 3D facial landmark annotations.  

o       Properties:

# of subjects: -
# of images/videos: 10,000+ images
Static/Videos: Static images
Single/Multiple faces: Single
Gray/Color: Color
Resolution: -
Face pose: Various poses
Facial expression: Various expressions
Illumination: Various illuminations
3D data: 3D facial landmark annotations
Ground truth: 66 3D facial landmark annotations

o       Reference: refer to the website: http://mhug.disi.unitn.it/workshop/3dfaw/.

 

5. Binghamton University facial expression databases.  

o       Source: The Binghamton University facial expression databases are built by Dr. Lijun Yin at Binghamton University and collaborators.

o       Purpose: The Binghamton University facial expression databases record images and videos of subjects showing various facial expressions. There are multiple subsets; some contain 4D facial data, and some contain multi-modal facial data.

o       Properties:

# of subjects: Varies with the data subset
# of images/videos: -
Static/Videos: Static images and videos
Single/Multiple faces: Single
Gray/Color: Color
Resolution: -
Face pose: -
Facial expression: Various expressions
Illumination: -
3D data: 3D face scans
Ground truth: Facial expression and facial action unit annotations; some subsets contain tracked facial landmark locations

o       Reference: refer to the website: http://www.cs.binghamton.edu/~lijun/Research/3DFE/3DFE_Analysis.html

 

6. Gaze Interaction For Everybody (GI4E) dataset.  

o       Source: The GI4E dataset is built by the GI4E group.

o       Purpose: The GI4E dataset contains facial videos with continuous head pose annotations.  

o       Properties:

# of subjects: 10
# of images/videos: 120 videos
Static/Videos: Videos
Single/Multiple faces: Single
Gray/Color: Color
Resolution: -
Face pose: Various poses
Facial expression: -
Illumination: -
3D data: -
Ground truth: Head pose annotations

o       Reference: refer to the paper: Mikel Ariz, José J. Bengoechea, Arantxa Villanueva, and Rafael Cabeza, "A novel 2D/3D database with automatic face annotation for head tracking and pose estimation", Computer Vision and Image Understanding, Volume 148, July 2016, Pages 201-210.

 

7. Boston University (BU) head tracking dataset.  

o       Source: The BU head tracking dataset is built by the Image and Video Computing group at Boston University.

o       Purpose: The BU head tracking dataset contains facial videos with continuous head pose annotations.  

o       Properties:

# of subjects: 7
# of images/videos: 70+ videos
Static/Videos: Videos
Single/Multiple faces: Single
Gray/Color: Color
Resolution: 320*240
Face pose: Various poses
Facial expression: -
Illumination: Uniform and varying lighting subsets
3D data: -
Ground truth: Continuous head pose annotations

o       Reference: refer to the paper: M. La Cascia, S. Sclaroff, and V. Athitsos, "Fast, Reliable Head Tracking under Varying Illumination: An Approach Based on Robust Registration of Texture-Mapped 3D Models", IEEE Trans. Pattern Analysis and Machine Intelligence (PAMI), 22(4), April 2000. (A head-pose error sketch follows below.)

 

8. Acted Facial Expressions in the Wild (AFEW) and Static Facial Expressions in the Wild (SFEW) databases.  

o       Source: The AFEW and SFEW databases are built by the Australian National University, the University of Canberra, and the Commonwealth Scientific and Industrial Research Organisation (CSIRO), Australia.

o       Purpose: Acted Facial Expressions in the Wild (AFEW) is a dynamic, temporal facial expression corpus consisting of video clips extracted from movies under close-to-real-world conditions. Static Facial Expressions in the Wild (SFEW) was developed by selecting frames from AFEW.

o       Properties:

# of subjects: 330
# of images/videos: 1,426 video sequences in the AFEW database; 700 images in the SFEW database (SPI category)
Static/Videos: Videos in AFEW, static images in SFEW
Single/Multiple faces: Multiple
Gray/Color: Color
Resolution: -
Face pose: Various poses
Facial expression: Angry, disgust, fear, happy, neutral, sad, surprise
Illumination: Various illuminations
3D data: Coarse head pose labels
Ground truth: 5 facial landmark annotations for some images

o       Reference: refer to the papers: Abhinav Dhall, Roland Goecke, Simon Lucey, and Tom Gedeon, "Collecting Large, Richly Annotated Facial-Expression Databases from Movies", IEEE MultiMedia, 2012; and Abhinav Dhall, Roland Goecke, Simon Lucey, and Tom Gedeon, "Static Facial Expressions in Tough Conditions: Data, Evaluation Protocol and Benchmark", First IEEE International Workshop on Benchmarking Facial Image Analysis Technologies (BeFIT), IEEE International Conference on Computer Vision (ICCV) Workshops, Barcelona, Spain, 6-13 November 2011.

 

9. LFW (Labeled Faces in the Wild) Database  

o       Source: The LFW database is built by the University of Massachusetts, Amherst.

o       Purpose: LFW is a database of face photographs designed for studying the problem of unconstrained face recognition.  Variation in clothing, pose, background, and other variables is large in LFW. 

o       Properties:

# of subjects: 5,749
# of images/videos: 13,233 images
Static/Videos: Static images
Single/Multiple faces: Single
Gray/Color: Color
Resolution: 250*250
Face pose: Various poses
Facial expression: Various expressions
Illumination: Various illuminations
3D data: N/A
Ground truth: Identities of subjects

o       Reference: refer to the paper: Gary B. Huang, Manu Ramesh, Tamara Berg, and Erik Learned-Miller, "Labeled Faces in the Wild: A Database for Studying Face Recognition in Unconstrained Environments", University of Massachusetts, Amherst, Technical Report 07-49, October 2007. (A sketch of reading the standard verification pairs follows below.)

 

10. Annotated Facial Landmarks in the Wild (AFLW) database  

o       Source: The AFLW database is built by Graz University of Technology.

o       Purpose: Annotated Facial Landmarks in the Wild (AFLW) provides a large-scale collection of annotated face images gathered from Flickr, exhibiting a large variety in appearance (e.g., pose, expression, ethnicity, age, gender) as well as general imaging and environmental conditions.  

o       Properties:

# of subjects: -
# of images/videos: 25,993 images
Static/Videos: Static images
Single/Multiple faces: Multiple
Gray/Color: Color
Resolution: High resolution
Face pose: Various poses
Facial expression: Various expressions
Illumination: Various illuminations
3D data: Coarse head pose estimation
Ground truth: 21-point markup

o       Reference: refer to the paper: Martin Koestinger, Paul Wohlhart, Peter M. Roth, and Horst Bischof, "Annotated Facial Landmarks in the Wild: A Large-scale, Real-world Database for Facial Landmark Localization", In First IEEE International Workshop on Benchmarking Facial Image Analysis Technologies, 2011.

 

11. Labeled Face Parts in the Wild (LFPW) Dataset  

o       Source: The LFPW database is built by Kriegman-Belhumeur Vision Technologies, LLC.

o       Purpose: LFPW was used to evaluate a face part (facial fiducial point) detection method. Release 1 of LFPW consists of 1,432 faces from images downloaded from the web using simple text queries on sites such as google.com, flickr.com, and yahoo.com. Each image was labeled by three MTurk workers, and 29 fiducial points are included in the dataset.

o       Properties:

# of subjects: -
# of images/videos: 1,432 images
Static/Videos: Static images
Single/Multiple faces: Single
Gray/Color: Color
Resolution: -
Face pose: Various poses
Facial expression: Various expressions
Illumination: Various illuminations
3D data: N/A
Ground truth: 29 annotated fiducial points

o       Reference: refer to the paper: Peter N. Belhumeur, David W. Jacobs, David J. Kriegman, and Neeraj Kumar, "Localizing Parts of Faces Using a Consensus of Exemplars", Proceedings of the 24th IEEE Conference on Computer Vision and Pattern Recognition (CVPR). Additional annotations can be found here: "http://ibug.doc.ic.ac.uk/resources".

 

12. Helen dataset  

o       Source: The Helen database is built by The Image Formation & Processing (IFP) Group at the University of Illinois, people from Adobe Systems Inc. and Facebook Inc.

o       Purpose: The Helen database provides a large-scale collection of annotated facial images gathered from Flickr, exhibiting a large variety in appearance (e.g., pose, expression, ethnicity, age, gender) as well as general imaging and environmental conditions.

o       Properties:

# of subjects: -
# of images/videos: 2,000 training and 330 testing images
Static/Videos: Static images
Single/Multiple faces: Single
Gray/Color: Color
Resolution: High resolution
Face pose: Various poses
Facial expression: Various expressions
Illumination: Various illuminations
3D data: N/A
Ground truth: 194 annotated facial landmarks

o       Reference: refer to the paper: Vuong Le, Jonathan Brandt, Zhe Lin, Lubomir Bourdev, and Thomas S. Huang, "Interactive Facial Feature Localization", ECCV 2012. Additional annotations can be found here: "http://ibug.doc.ic.ac.uk/resources".

 

 

13. The Facial Recognition Technology (FERET) Database

o       Source: the FERET database is sponsored by the Defense Advanced Research Projects Agency (DARPA).

o       Purpose: the FERET database is widely used as a standard database for evaluating face recognition systems. It may also be used for face pose estimation and eye detection.

o       Properties:

# of subjects: 1,199
# of images/videos: 14,051 images
Static/Videos: Static images
Single/Multiple faces: Single
Gray/Color: Eight-bit gray
Resolution: 256*384
Face pose: 7 categories: frontal, quarter-left, quarter-right, half-left, half-right, full-left, full-right
Facial expression: Slight facial expression changes
Illumination: Controlled illumination
3D data: N/A
Ground truth: Positions of eyes, nose, and mouth; identities of subjects

o       Reference: refer to the paper: P. J. Phillips, Hyeonjoon Moon, S. A. Rizvi, and P. J. Rauss, "The FERET evaluation methodology for face-recognition algorithms", IEEE Trans. on PAMI, vol. 22, no. 10, pp. 1090-1104, October 2000, and the online document http://www.itl.nist.gov/iad/humanid/feret/feret_master.html. (A rank-1 identification sketch follows below.)

 

14. Face Recognition Grand Challenge (FRGC) Database

o       Source: the FRGC database is jointly sponsored by several government agencies interested in improving the capabilities of face recognition technology.

o       Purpose: the primary goal of the FRGC database is to evaluate face recognition technology. It may also be used for eye detection.

o       Properties:

# of subjects: 222 (large still training set); 466 (validation set)
# of images/videos: 12,776 (large still training set); 943*8 (3D training set); 4,007*8 (validation set)
Static/Videos: Static images
Single/Multiple faces: Single
Gray/Color: Color
Resolution: 1704*2272 or 1200*1600
Face pose: Frontal view
Facial expression: Neutral and smiling
Illumination: Controlled and uncontrolled illumination
3D data: Yes (range and texture)
Ground truth: Positions of eyes, nose, and mouth; identities of subjects

o       Reference: Please refer to the paper: P. J. Phillips, P. J. Flynn, T. Scruggs, K. W. Bowyer, J. Chang, K. Hoffman, J. Marques, J. Min, and W. Worek, "Overview of the face recognition grand challenge", Proc. of CVPR 2005, no. 1, pp. 947-954, June 2005, and the original document under the directory BEE_DIST\doc. (A sketch of computing the verification rate at a fixed false accept rate follows below.)

 

15. CAS-PEAL Face Database

o       Source: the CAS-PEAL database is obtained from the Chinese Academy of Sciences.

o       Purpose: the CAS-PEAL database is used to evaluate face recognition systems. It may also be used for eye detection, face pose estimation, and facial expression recognition.

o       Properties:

# of subjects: 1,040 Asian subjects (595 males and 445 females)
# of images/videos: 30,900 images
Static/Videos: Static images
Single/Multiple faces: Single
Gray/Color: Eight-bit gray
Resolution: 360*480
Face pose: 21 pose angles; vertical: up, middle, and down; horizontal: left to right (67°, 45°, 22°, 0°, -22°, -45°, -67°)
Facial expression: 6 facial expressions: neutral, eye closing, frown, smile, surprise, and mouth open
Illumination: 15 lighting conditions
Accessories: 3 kinds of glasses and 3 kinds of caps
3D data: N/A
Ground truth: Positions of eyes; identities of subjects; face pose angles; facial expression labels; illumination positions

o       Reference: Please refer to the technical report JDL-TR-04-FR-001, "The CAS-PEAL Large-Scale Chinese Face Database and Baseline Evaluations".

 

16. CMU Pose, Illumination, and Expression (PIE) Database

o       Source: The PIE database is obtained from the Robotics Institute of Carnegie Mellon University.

o       Purpose: the PIE database is used to evaluate face recognition systems. It may also be used for facial feature detection, face pose estimation, and facial expression recognition.

o       Properties:

# of subjects: 68
# of images/videos: 41,368 images
Static/Videos: Static images
Single/Multiple faces: Single
Gray/Color: Color
Resolution: 640*486
Face pose: 13 pose angles (vertical and horizontal)
Facial expression: 4 facial expressions: neutral, eye closing, smiling, and talking
Illumination: N/A
Accessories: Glasses
3D data: N/A
Ground truth: Some feature point data; identities of subjects; measured camera locations; head pose; facial expression labels; illumination positions
Additional materials: Background images

o       Reference: Please refer to the CMU Technical Report CMU-RI-TR-01-02, "The CMU Pose, Illumination, and Expression (PIE) Database of Human Faces".

 

17. CMU Face Database (Frontal and Profile)

o       Source: this database is obtained from the Robotics Institute of Carnegie Mellon University. It combines images collected at CMU and MIT.

o       Purpose: this database is primarily used for face detection task. It may also be used for eye detection and facial feature detection.

o       Properties:

# of subjects: N/A
# of images/videos: 169 frontal-view face images and 202 profile face images
Static/Videos: Static images
Single/Multiple faces: Multiple
Gray/Color: Eight-bit gray
Resolution: N/A
Face pose: Frontal and profile
Facial expression: Various facial expressions
Illumination: Various lighting conditions
Accessories: Various
3D data: N/A
Ground truth: Positions of eyes, nose tip, mouth corners, and mouth center for each frontal-view face; positions of eye corner, eye, nose, nose tip, mouth corner, mouth center, chin, earlobe, and ear tip for each profile face

 

18. Yale Face Database

o       Source: this database is constructed by Yale University.

o       Purpose: this database can be used for face recognition and facial expression recognition.

o       Properties:

# of subjects: 15
# of images/videos: 165 images
Static/Videos: Static images
Single/Multiple faces: Single
Gray/Color: Eight-bit gray
Resolution: 320*243
Face pose: Frontal view
Facial expression: 6 facial expressions: neutral, happiness, sadness, sleepiness, surprise, and wink
Illumination: 3 lighting conditions: center-light, left-light, and right-light
Accessories: Glasses
3D data: N/A
Ground truth: Identities of subjects; facial expression labels; illumination positions

o       Reference: Please refer to the paper: P. N. Belhumeur, J. P. Hespanha, and D. J. Kriegman, "Eigenfaces vs. Fisherfaces: Recognition using class specific linear projection", IEEE Trans. on PAMI, vol. 19, no. 7, pp. 711-720, July 1997.

 

19. Yale Face Database B

o       Source: this database is constructed by Yale University.

o       Purpose: this database can be used for face recognition, face pose estimation, and eye detection.

o       Properties:

# of subjects: 10
# of images/videos: 5,760 images
Static/Videos: Static images
Single/Multiple faces: Single
Gray/Color: Gray
Resolution: 640*480 (eye distance ~90 pixels)
Face pose: 9 poses
Facial expression: Neutral
Illumination: 64 lighting conditions and 1 ambient illumination
Accessories: N/A
3D data: N/A
Ground truth: Identities of subjects; face pose; illumination positions; coordinates of eyes and mouth (frontal view) and of the face center (other views)

o       Reference: Please refer to the paper: A. S. Georghiades, P. N. Belhumeur, and D. J. Kriegman, "From Few to Many: Illumination Cone Models for Face Recognition under Variable Lighting and Pose", IEEE Trans. on PAMI, vol. 23, no. 6, pp. 643-660, June 2001.

 

20. Georgia Tech Face Database

o       Source: this database is constructed by Georgia Institute of Technology.

o       Purpose: this database is primarily used for face recognition. It may also be used for face detection.

o       Properties:

# of subjects: 50
# of images/videos: 750 images
Static/Videos: Static images
Single/Multiple faces: Single
Gray/Color: Color
Resolution: 640*480
Face pose: Nearly frontal-view or quarter-profile images
Facial expression: Various
Illumination: Various
Accessories: Glasses
3D data: N/A
Ground truth: Identities of subjects; coordinates of the upper-left and lower-right corners of the face rectangle

 

21. AR Face Database

o       Source: this database is constructed by Aleix Martinez and Robert Benavente in the Computer Vision Center (CVC) at the Universitat Autònoma de Barcelona (UAB).

o       Purpose: this database is primarily used for face recognition. It may also be used for facial expression recognition.

o       Properties:

# of subjects: 126 (70 male and 56 female)
# of images/videos: 4,000 images
Static/Videos: Static images and image sequences
Single/Multiple faces: Single
Gray/Color: Color
Resolution: 768*576
Face pose: Nearly frontal-view or quarter-profile images
Facial expression: 4 facial expressions: neutral, smile, anger, and scream
Illumination: 3 illumination conditions: left light on, right light on, and all side lights on
Accessories: Sunglasses, scarf
3D data: N/A
Ground truth: Identities of subjects; facial expression labels

o       Reference: Please refer to the technical report: A. M. Martinez and R. Benavente, "The AR Face Database", CVC Technical Report #24, June 1998.

 

22. UMIST Face Database

o       Source: this database was constructed by the University of Manchester Institute of Science and Technology (UMIST), which later merged with the Victoria University of Manchester to form the University of Manchester.

o       Purpose: this database is primarily used for face recognition.

o       Properties:

# of subjects: 20
# of images/videos: 564 images
Static/Videos: Static images
Single/Multiple faces: Single
Gray/Color: Eight-bit gray
Resolution: 92*112
Face pose: From profile to frontal views
Facial expression: Neutral
Illumination: N/A
Accessories: Glasses
3D data: N/A
Ground truth: Cropped face region; identities of subjects

o       Reference: Please refer to the paper: Daniel B. Graham and Nigel M. Allinson, "Characterizing Virtual Eigensignatures for General Purpose Face Recognition", in Face Recognition: From Theory to Applications, NATO ASI Series F, Computer and Systems Sciences, vol. 163, H. Wechsler, P. J. Phillips, V. Bruce, F. Fogelman-Soulie, and T. S. Huang (eds), pp. 446-456, 1998.

 

23. ORL Database of Faces

o       Source: this database is constructed by AT&T Laboratories Cambridge.

o       Purpose: this database is primarily used for face recognition.

o       Properties:

# of subjects: 40
# of images/videos: 400 images
Static/Videos: Static images
Single/Multiple faces: Single
Gray/Color: Eight-bit gray
Resolution: 92*112
Face pose: Moderate pose variation (up and down, quarter-profile to frontal view)
Facial expression: 3 facial expressions: neutral, smiling, and eyes closed
Illumination: N/A
Accessories: Glasses
3D data: N/A
Ground truth: Cropped face region; identities of subjects

o       Reference: Please refer to the paper: F. S. Samaria and A. C. Harter, "Parameterisation of a stochastic model for human face identification", Proc. of the 2nd IEEE Workshop on Applications of Computer Vision, pp. 138-142, 1994.

 

24. MIT CBCL Face Database #1

o       Source: this database is constructed by the Center for Biological and Computational Learning (CBCL) at MIT.

o       Purpose: this database is primarily used for face detection.

o       Properties:

# of subjects: N/A
# of images/videos: Training set: 2,429 faces and 4,548 non-faces; test set: 472 faces and 23,573 non-faces
Static/Videos: Static images
Single/Multiple faces: Single
Gray/Color: Eight-bit gray
Resolution: 19*19
Face pose: Moderate pose variation
Facial expression: Moderate facial expression changes
Illumination: Moderate illumination variation
Accessories: Glasses
3D data: N/A
Ground truth: Cropped face region

 

25. Pointing Head Pose Image Database

o       Source: this database is obtained from http://www-prima.inrialpes.fr/Pointing04/data-face.html.

o       Purpose: this database is primarily used for face pose estimation task. It may also be used for face recognition.

o       Properties:

# of subjects: 15
# of images/videos: 2,790 images
Static/Videos: Static images
Single/Multiple faces: Single
Gray/Color: Color
Resolution: 384*288
Face pose: Vertical: -90, -60, -30, -15, 0, +15, +30, +60, +90; horizontal: -90, -75, -60, -45, -30, -15, 0, +15, +30, +45, +60, +75, +90 (degrees)
Facial expression: Neutral
Illumination: N/A
Accessories: Glasses
3D data: N/A
Ground truth: Identities of subjects; face pose angles

o       Reference: Please refer to the paper: N. Gourier, D. Hall, and J. L. Crowley, "Estimating Face Orientation from Robust Detection of Salient Facial Features", Proc. of Pointing 2004, ICPR International Workshop on Visual Observation of Deictic Gestures.

 

26. Spacetime Face Database

o       Source: this database is constructed by the University of Washington Graphics and Imaging Laboratory.

o       Purpose: this database is primarily used for face modeling and animation.

o       Properties: 384 face meshes, each with about 23K vertices.

o       Reference: Please refer to the paper: Li Zhang, Noah Snavely, Brian Curless, and Steve Seitz, "Spacetime Faces: High-resolution capture for modeling and animation", Proc. of ACM SIGGRAPH 2004.

 

27. BioID Database

o       Source: this database is constructed by the HumanScan company.

o       Purpose: this database can be used for face detection, face recognition and eye detection.

o       Properties:

# of subjects: 23
# of images/videos: 1,521 images
Static/Videos: Static images
Single/Multiple faces: Single
Gray/Color: Gray
Resolution: 384*286 (eye distance ~50 pixels)
Face pose: Frontal
Facial expression: Various
Illumination: Various lighting conditions
Accessories: Various
3D data: N/A
Ground truth: Coordinates of eyes; coordinates of 20 feature points (eyebrow corners, eyes, mouth, and tip of chin)

o       Reference: Please refer to the paper: O. Jesorsky, K. Kirchberg, and R. Frischholz, "Robust Face Detection Using the Hausdorff Distance", Audio- and Video-based Person Authentication (AVBPA 2001), pp. 90-95, Springer, 2001.


28. CVL Face Database

o       Source: this database is constructed by Peter Peer at the Computer Vision Laboratory of the University of Ljubljana.

o       Purpose: this database can be used for face detection, feature detection, face recognition and 3D face modeling.

o       Properties:

# of subjects: 114
# of images/videos: 798 images
Static/Videos: Static images
Single/Multiple faces: Single
Gray/Color: Color
Resolution: 640*480
Face pose: Horizontal: -90, -45, 0, +45, +90 (degrees)
Facial expression: 3 facial expressions: serious, smiling without showing teeth, and smiling showing teeth
Illumination: N/A
Accessories: N/A
3D data: N/A
Ground truth: N/A

o       Reference: Please contact Peter Peer ([email protected]) and the Computer Vision Laboratory of the University of Ljubljana for the database.



29. NIST Mugshot Identification Database (MID)

o       Source: this database is constructed by the National Institute of Standards and Technology (NIST).

o       Purpose: this database is primarily for use in the development and testing of automated mugshot identification systems.

o       Properties:

# of subjects: 1,573 (1,495 male and 78 female)
# of images/videos: 3,248 images
Static/Videos: Static images
Single/Multiple faces: Single
Gray/Color: Gray
Resolution: Varying (scanned photographs roughly 1" to 2.5" in height)
Face pose: Frontal and profile
Facial expression: N/A
Illumination: N/A
Accessories: N/A
3D data: N/A
Ground truth: Subject identities


30. The University of Oulu Physics-Based Face Database

o       Source: this database is collected at the Machine Vision and Media Processing Unit, University of Oulu.

o       Purpose: this database can be used for research in face recognition and color.

o       Properties:

# of subjects: 125
# of images/videos: N/A (each person has 16 frontal views, plus an additional 16 if the person wears glasses)
Static/Videos: Static images
Single/Multiple faces: Single
Gray/Color: Color
Resolution: 428*569
Face pose: Minor pose variation
Facial expression: N/A
Illumination: Four illuminants: horizon, incandescent, fluorescent, and daylight
Accessories: Glasses
3D data: N/A
Ground truth: N/A

o       Reference: Please refer to the papers: "A physics-based face database for color research", Journal of Electronic Imaging, Vol. 9, No. 1, pp. 32-38, and "Color correction of face images under different illuminants by RGB eigenfaces", Proc. of the 2nd Audio- and Video-Based Biometric Person Authentication Conference (AVBPA99), March 22-23, Washington DC, USA, pp. 148-153.


31. UCD Color Face Image (UCFI) Database

o       Source: this database is constructed by the Department of Electronic & Electrical Engineering, University College Dublin. The images are acquired from a wide variety of sources, such as digital cameras, pictures scanned using a photo-scanner, other face databases, and the World Wide Web.

o       Purpose: this database is primarily used to test new face detection algorithms that use color information.

o       Properties:

# of subjects: N/A
# of images/videos: 299 images
Static/Videos: Static images
Single/Multiple faces: Single
Gray/Color: Color
Resolution: Various
Face pose: Frontal, profile, intermediate, upright, and rotated
Facial expression: Various
Illumination: Various
Accessories: Glasses, sunglasses, beards, moustaches
3D data: N/A
Ground truth: Hand segmentation


 
