Cover
Handbook of Face Recognition, 2nd Edition
ISBN 978-0-85729-931-4
Preface
Contents
Contributors
Chapter 1: Introduction
1.1 Face Recognition
1.2 Categorization
1.3 Processing Workflow
1.4 Face Subspace
1.5 Technology Challenges
Large Variability in Facial Appearance
Complex Nonlinear Manifolds
High Dimensionality and Small Sample Size
1.6 Solution Strategies
1.7 Current Status
1.8 Summary
References
Part I: Face Image Modeling and Representation
Chapter 2: Face Recognition in Subspaces
2.1 Introduction
2.2 Face Space and Its Dimensionality
2.2.1 Image Space Versus Face Space
2.2.2 Principal Manifold and Basis Functions
2.2.3 Principal Component Analysis
2.2.4 Eigenspectrum and Dimensionality
2.3 Linear Subspaces
2.3.1 Eigenfaces and Related Techniques
2.3.2 Probabilistic Eigenspaces
2.3.3 Linear Discriminants: Fisherfaces
2.3.4 Bayesian Methods
2.3.5 Independent Component Analysis and Source Separation
2.3.6 Multilinear SVD: "Tensorfaces"
2.4 Nonlinear Subspaces
2.4.1 Principal Curves and Nonlinear PCA
2.4.2 Kernel-PCA and Kernel-Fisher Methods
2.5 Empirical Comparison of Subspace Methods
2.5.1 PCA-Based Recognition
2.5.2 ICA-Based Recognition
2.5.3 KPCA-Based Recognition
2.5.4 MAP-Based Recognition
2.5.5 Compactness of Manifolds
2.5.6 Discussion
2.6 Methodology and Usage
2.6.1 Multiple View-Based Approach for Pose
2.6.2 Modular Recognition
2.6.3 Recognition with Sets
2.7 Conclusions
References
Chapter 3: Face Subspace Learning
3.1 Introduction
3.2 Subspace Learning: A Global Perspective
3.2.1 General Mean Criteria
3.2.2 Max-Min Distance Analysis
3.2.3 Empirical Evaluation
3.2.4 Related Works
3.3 Subspace Learning: A Local Perspective
3.3.1 Patch Alignment Framework
3.3.2 Discriminative Locality Alignment
3.3.3 Manifold Elastic Net
3.3.4 Related Works
3.4 Transfer Subspace Learning
3.4.1 TSL Framework
3.4.2 Cross Domain Face Recognition
References
Chapter 4: Local Representation of Facial Features
4.1 Introduction
4.1.1 Structure and Scope of the Chapter
4.2 Review of Facial Feature Representations
4.3 Local Binary Patterns
4.3.1 Local Binary Patterns
4.3.1.1 LBP in the Spatial Domain
4.3.1.2 Spatiotemporal LBP
4.3.1.3 Multi-Scale LBP
4.3.2 Face Description Using LBP
4.3.2.1 Description of Static Face Images
4.3.2.2 Description of Face Sequences
4.3.3 Face Recognition Using LBP Descriptors
4.3.4 LBP in Other Face-Related Problems
4.4 Gabor Features
4.4.1 Introduction
4.4.2 Gabor Filter
4.4.3 Constructing Gabor Features
4.4.4 Learning Facial Features
4.4.5 Detecting Facial Features
Experiments Using the XM2VTS Face Database
4.5 Discussions on Local Features
4.6 Conclusions
References
Chapter 5: Face Alignment Models
5.1 Introduction
5.1.1 Statistical Models of Shape
5.1.1.1 Aligning Sets of Shapes
5.1.1.2 Linear Models of Shape Variation
5.1.1.3 Choosing the Number of Shape Modes
5.1.1.4 Fitting the Model to New Points
5.1.1.5 Further Reading
5.1.2 Statistical Models of Texture
5.1.2.1 Aligning Sets of Textures
5.1.2.2 Linear Models of Texture Variation
5.1.2.3 Choosing the Number of Texture Modes
5.1.2.4 Fitting the Model to New Textures
5.1.2.5 Further Reading
5.1.3 Combined Models of Appearance
5.1.3.1 Choosing Shape Parameter Weights
5.1.3.2 Separating Sources of Variability
5.2 Active Shape Models (ASMs)
5.2.1 Goodness of Fit
5.2.2 Iterative Model Refinement
5.2.3 Multi-Resolution Active Shape Models
5.2.4 Examples of ASM Search
5.2.5 Further Reading
5.3 Active Appearance Models (AAMs)
5.3.1 Goodness of Fit
5.3.2 Updating Model Parameters
5.3.2.1 Estimating R via Linear Regression
5.3.2.2 Estimating R via Gauss-Newton Approximation
5.3.3 Iterative Model Refinement
5.3.4 Multi-Resolution Active Appearance Models
5.3.5 Examples of AAM Search
5.3.6 Alternative Strategies
5.3.6.1 Shape AAM
5.3.6.2 Compositional Approach
5.3.7 Further Reading
5.4 Conclusions
References
Chapter 6: Morphable Models of Faces
6.1 Introduction
6.1.1 Three-Dimensional Representation
6.1.2 Correspondence-Based Representation
6.1.3 Face Statistics
6.2 3D Morphable Model Construction
6.2.1 3D Face Scanning
6.2.2 Registration
6.2.3 PCA Subspace
6.2.4 Regularized Morphable Model
6.2.4.1 Probabilistic PCA
6.2.5 Segmented Morphable Model
6.2.6 Identity/Expression Separated 3D Morphable Model
6.3 Morphable Model to Synthesize Images
6.3.1 Shape Projection
6.3.2 Illumination and Color Transformation
6.3.2.1 Ambient and Directed Light
6.3.2.2 Color Transformation
6.4 Image Analysis with a 3D Morphable Model
6.4.1 Maximum a Posteriori Estimation of the Parameters
6.4.2 Stochastic Newton Optimization
6.4.2.1 Fitting Results
6.4.3 Multiple Feature Fitting
6.5 Experimental Evaluation
6.5.1 Pose Variation
6.5.2 Pose and Illumination Variations
6.5.3 Identification Confidence
6.5.4 Virtual Views as an Aid to Standard Face Recognition Algorithms
6.5.5 Face Identification on 3D Scans
6.5.5.1 Results
UND
GavabDB
6.6 Conclusions
References
Chapter 7: Illumination Modeling for Face Recognition
7.1 Introduction
7.2 Background on Reflectance and Lighting
7.3 PCA-Based Linear Lighting Models
7.4 Linear Lighting Models without Shadows
7.5 Nonlinear Models with Attached Shadows
7.6 Spherical Harmonic Representations
7.6.1 Spherical Harmonics and the Funk-Hecke Theorem
7.6.2 Properties of the Convolution Kernel
7.6.3 Approximating the Reflectance Function
7.6.4 Generating Harmonic Reflectances
7.6.5 From Reflectances to Images
7.7 Applications
7.7.1 Recognition
7.7.1.1 Linear Methods
7.7.1.2 Enforcing Nonnegative Light
7.7.1.3 Specularity
7.7.1.4 Experiments
7.7.2 Modeling
7.7.2.1 Photometric Stereo
7.7.2.2 Objects in Motion
7.7.2.3 Reconstruction with Shape Prior
7.8 Conclusions
References
Chapter 8: Face Recognition Across Pose and Illumination
8.1 Introduction
8.1.1 Multiview Face Recognition and Face Recognition Across Pose
8.1.2 Illumination Invariant Face Recognition
8.1.3 Algorithms for Face Recognition Across Pose and Illumination
8.2 Eigen Light-Fields
8.2.1 Light-Fields Theory
8.2.1.1 Object Light-Fields
8.2.1.2 Eigen Light-Fields
8.2.2 Application to Face Recognition Across Pose
8.2.2.1 Vectorization by Normalization
8.2.2.2 Classification Using Nearest Neighbor
8.2.2.3 Selecting the Gallery, Probe, and Generic Training Data
8.2.3 Experimental Results
8.2.3.1 Databases
8.2.3.2 Comparison with Other Algorithms
8.3 Bayesian Face Subregions
8.3.1 Face Subregions and Feature Representation
8.3.2 Modeling Local Appearance Change Across Pose
8.3.3 Experimental Results
8.3.3.1 Experiment 1: Unknown Probe Pose
8.3.3.2 Experiment 2: Known Probe Pose
8.4 Face Recognition Across Pose and Illumination
8.4.1 Fisher Light-Fields
8.4.1.1 Experimental Results
8.4.2 Illumination Invariant Bayesian Face Subregions
8.5 Conclusions
References
Chapter 9: Skin Color in Face Analysis
9.1 Introduction
9.2 Color Cue and Facial Image Analysis
9.3 Color Appearance for Color Cameras
9.3.1 Color Image Formation and Illumination
9.3.2 The Effect of White Balancing
9.3.2.1 Canonical Images and Colors
9.3.2.2 Non-canonical Images and Colors
9.4 Separating Sources of Skin Data
9.5 Modeling Skin Colors
9.5.1 Behavior of Skin Complexions in Different Color Spaces Under Varying Illumination
9.5.2 Color Spaces for Skin
9.5.3 Skin Color Model and Illumination
9.5.4 Mathematical Models for Skin Color
9.5.4.1 Video Sequences
9.6 Color Cue for Face Detection
9.7 Color Cue for Face Recognition
9.8 Conclusions
References
Chapter 10: Face Aging Modeling
10.1 Introduction
10.2 Preprocessing
10.2.1 2D Facial Feature Point Detection
10.2.1.1 FG-NET
10.2.1.2 MORPH
10.2.1.3 BROWNS
10.2.2 3D Model Fitting
10.3 Aging Pattern Modeling
10.3.1 Shape Aging Pattern
10.3.2 Texture Aging Pattern
10.3.3 Separate and Combined Shape & Texture Modeling
10.4 Aging Simulation
10.5 Experimental Results
10.5.1 Database
10.5.2 Face Recognition Tests
10.5.3 Effects of Different Cropping Methods
10.5.4 Effects of Different Strategies in Employing Shape and Texture
10.5.5 Effects of Different Filling Methods in Model Construction
10.6 Conclusions
References
Part II: Face Recognition Techniques
Chapter 11: Face Detection
11.1 Introduction
11.2 Appearance and Learning-Based Approaches
11.3 AdaBoost-Based Methods
11.3.1 Local Features
11.3.2 Learning Weak Classifiers
11.3.3 Learning Strong Classifiers Using AdaBoost
11.3.4 Alternative Feature Selection Methods
11.3.5 Asymmetric Learning Methods
11.3.6 Cascade of Strong Classifiers
11.4 Dealing with Head Rotations
11.4.1 Hierarchical Organization of Multi-view Faces
11.4.2 From Face-Pose Hierarchy to Detector-Pyramid
11.5 Postprocessing
11.6 Performance Evaluation
11.6.1 Performance Measures
11.6.2 Comparison of Cascade-Based Detectors
11.7 Conclusions
References
Chapter 12: Facial Landmark Localization
12.1 Introduction
12.2 Framework for Landmark Localization
12.3 Eye Localization
12.3.1 Midline of Eyes
12.3.2 Eye Candidate Detection
12.3.3 Eye Candidate Subsampling
12.3.4 Eye-Pair Classification
12.4 Random Forest Embedded ASM
12.4.1 Shape Modeling
12.4.2 Distance Measurement
12.4.3 Global Optimization
12.5 Experiments
12.5.1 Eye Localization
12.5.2 Random Forest Embedded ASM
12.6 Conclusions
References
Chapter 13: Face Tracking and Recognition in Video
13.1 Introduction
13.2 Utility of Video
Frame-Based Fusion
Ensemble Matching
Appearance Modeling
13.3 Still Gallery vs. Video Probes
13.3.1 Posterior Probability of Identity Variable
13.3.2 Sequential Importance Sampling Algorithm
13.3.3 Experimental Results
13.3.3.1 Results for Database-0
13.3.3.2 Results on Database-1
Case 1: Tracking and Recognition Using Laplacian Density
Case 2: Pure Tracking Using Laplacian Density
Case 3: Tracking and Recognition Using Probabilistic Subspace Density
Case 4: Tracking and Recognition Using Combined Density
Case 5: Still-to-Still Face Recognition
13.4 Video Gallery vs. Video Probes
13.4.1 Parametric Model for Appearance and Dynamic Variations
13.4.2 The Manifold Structure of Subspaces
13.4.3 Video-Based Face Recognition Experiments
13.5 Face Recognition in Camera Network
13.5.1 Face Tracking from Multi-view Videos
13.5.2 Pose-Free Feature Based on Spherical Harmonics
13.5.3 Measure Ensemble Similarity
13.5.4 Experiments
13.5.4.1 Feature Comparison
13.5.4.2 Video-Based Recognition
13.6 Conclusions
References
Chapter 14: Face Recognition at a Distance
14.1 Introduction
14.1.1 Primary Challenges
14.1.2 Optics and Light Intensity
14.1.3 Exposure Time and Blur
14.1.4 Image Resolution
14.1.5 Pose, Illumination and Expression
14.1.6 Approaches
14.1.6.1 High-Definition Stationary Camera
14.1.6.2 Active-Vision Systems
14.1.7 Literature Review
14.1.7.1 Databases
14.1.7.2 Active-Vision Systems
14.1.7.3 NFOV Resource Allocation
14.1.7.4 Very Long Distances
14.1.7.5 3D Imaging
14.1.7.6 Face and Gait Fusion
14.2 Face Capture at a Distance
14.2.1 Target Selection
14.2.2 Recognition
14.3 Low-Resolution Facial Model Fitting
14.3.1 Face Model Enhancement
14.3.2 Multi-Resolution AAM
14.3.3 Experiments
14.4 Facial Image Super-Resolution
14.4.1 Registration and Super-Resolution
14.4.2 Results
14.5 Conclusions
References
Chapter 15: Face Recognition Using Near Infrared Images
15.1 Introduction
15.2 Active NIR Imaging System
15.3 Illumination Invariant Face Representation
15.3.1 Modeling of Active NIR Images
15.3.2 Compensation for Monotonic Transform
15.4 NIR Face Classification
15.4.1 AdaBoost-Based Feature Selection
15.4.2 LDA Classifier
15.5 Experiments
15.5.1 Basic Evaluation
15.5.2 Weak Illumination
15.5.3 Eyeglasses
15.5.4 Time Lapse
15.5.5 Outdoor Environment
15.6 Conclusions
References
Chapter 16: Multispectral Face Imaging and Analysis
16.1 Introduction
16.2 Multispectral Imaging
16.2.1 Multispectral Imaging Using Rotating Wheels
16.2.2 Multispectral Imaging Using Electronically Tunable Filters
16.2.3 Multispectral Band Selection
16.3 The IRIS-M3 Face Database
16.4 Complexity-Guided Distance-Based Band Selection
16.4.1 Kernel Density Estimation
16.4.2 Probabilistic Distance Measure
16.4.3 Redundancy Measure
16.5 Experimental Results
16.5.1 Simulated Data
16.5.2 Real Data
16.6 Conclusions
References
Chapter 17: Face Recognition Using 3D Images
17.1 Introduction
17.1.1 3D Face Recognition
17.1.2 3D Face Recognition from Partial Scans: UR3D-PS
17.1.3 3D-aided 2D Face Recognition
17.1.4 3D-aided Profile Recognition
17.2 3D Face Recognition: UR3D
17.3 3D Face Recognition for Partial Scans: UR3D-PS
17.3.1 3D Landmark Detection
17.3.2 Partial Registration
17.3.3 Symmetric Deformable Model Fitting
17.4 3D-aided Profile Recognition: URxD-PV
17.4.1 Profile Extraction from 2D Images
17.4.2 Identification
17.4.3 Integration
17.5 3D-aided 2D Face Recognition: UR2D
17.5.1 3D + 2D Enrollment
17.5.2 2D Authentication
17.5.3 Skin Reflectance Model
17.5.4 Bidirectional Relighting
17.6 Experimental Results
17.6.1 3D Face Recognition
17.6.2 3D Face Recognition for Partial Scans
17.6.3 3D-aided Profile Recognition
17.6.4 3D-aided 2D Face Recognition
Database UHDB11
Database UHDB12
Authentication
2D-3D Identification Experiment
17.7 Conclusions
References
Chapter 18: Facial Action Tracking
18.1 Introduction
18.1.1 Previous Work
18.1.1.1 Rigid Face/Head Tracking
18.1.1.2 Facial Action Tracking
18.1.2 Outline
18.2 Parametric Face Modeling
18.2.1 Eigenfaces
18.2.2 Facial Action Coding System
18.2.3 MPEG-4 Facial Animation
18.2.4 Computer Graphics Models
18.2.5 Candide: A Simple Wireframe Face Model
18.2.6 Projection Models
18.3 Tracking Strategies
18.3.1 Motion-Based vs. Model-Based Tracking
18.3.2 Model-Based Tracking: First Frame Models vs. Pre-trained Models
18.3.3 Appearance-Based vs. Feature-Based Tracking
18.4 Feature-Based Tracking Example
18.4.1 Face Model Parameterization
18.4.1.1 Pose Parameterization
18.4.1.2 Structure Parameterization
18.4.2 Parameter Estimation Using an Extended Kalman Filter
18.4.3 Tracking Process
18.4.4 Tracking of Facial Action
18.5 Appearance-Based Tracking Example
18.5.1 Face Model Parameterization
18.5.1.1 Geometry Parameterization
18.5.1.2 Pose Parameterization
18.5.1.3 Texture Parameterization
18.5.2 Tracking Process
18.5.3 Tracking Example
18.5.4 Improvements
18.6 Fused Trackers
18.6.1 Combining Motion- and Model-Based Tracking
18.6.2 Combining Appearance- and Feature-Based Tracking
18.6.3 Commercially Available Trackers
18.7 Conclusions
References
Chapter 19: Facial Expression Recognition
19.1 Introduction
19.2 Principles of Facial Expression Analysis
19.2.1 Basic Structure of Facial Expression Analysis Systems
19.2.2 Organization of the Chapter
19.3 Problem Space for Facial Expression Analysis
19.3.1 Level of Description
19.3.2 Individual Differences in Subjects
19.3.3 Transitions Among Expressions
19.3.4 Intensity of Facial Expression
19.3.5 Deliberate Versus Spontaneous Expression
19.3.6 Head Orientation and Scene Complexity
19.3.7 Image Acquisition and Resolution
19.3.8 Reliability of Ground Truth
19.3.9 Databases
19.3.10 Relation to Other Facial Behavior or Nonfacial Behavior
19.3.11 Summary and Ideal Facial Expression Analysis Systems
19.4 Recent Advances
19.4.1 Face Acquisition
19.4.1.1 Face Detection
19.4.1.2 Head Pose Estimation
3D Model-Based Method
2D Image-Based Method
19.4.2 Facial Feature Extraction and Representation
19.4.2.1 Geometric Feature Extraction
19.4.2.2 Appearance Feature Extraction
19.4.3 Facial Expression Recognition
Frame-Based Expression Recognition
Sequence-Based Expression Recognition
19.4.4 Multimodal Expression Analysis
19.4.5 Databases for Facial Expression Analysis
19.5 Open Questions
19.6 Conclusions
References
Chapter 20: Face Synthesis
20.1 Introduction
20.2 Face Modeling
20.2.1 Face Modeling from an Image Sequence
20.2.2 Face Modeling from Two Orthogonal Views
20.2.3 Face Modeling from a Single Image
20.3 Face Relighting
20.3.1 Face Relighting Using Ratio Images
20.3.2 Face Relighting from a Single Image
20.3.3 Application to Face Recognition Under Varying Illumination
20.4 Facial Expression Synthesis
20.4.1 Physically Based Facial Expression Synthesis
20.4.2 Morph-Based Facial Expression Synthesis
20.4.3 Expression Mapping
20.4.3.1 Mapping Expression Details
20.4.3.2 Geometry-Driven Expression Synthesis
20.5 Discussion
References
Part III: Performance Evaluation: Machines and Humans
Chapter 21: Evaluation Methods in Face Recognition
21.1 Introduction
21.2 Performance Measures
21.2.1 Open-Set Identification
21.2.2 Verification
21.2.3 Closed-Set Identification
21.2.4 Normalization
21.2.5 Variability
21.3 Evaluation Protocols
21.4 The FERET Evaluations
21.4.1 Database
21.4.2 Evaluation
21.4.3 Summary
21.5 The FRVT 2000
21.5.1 The FRVT 2002
21.6 The MBE 2010 Still Face Track
21.7 Issues and Discussions
21.8 Conclusions
References
Chapter 22: Dynamic Aspects of Face Processing in Humans
22.1 Introduction
22.2 Dynamic Information for Identity
22.2.1 Developmental Aspects
22.2.2 Neurophysiological Aspects
22.2.3 Summary
22.3 Dynamic Information for Expressions
22.3.1 Developmental Aspects
22.3.2 Neurophysiological Aspects
22.4 Conclusions
References
Chapter 23: Face Recognition by Humans and Machines
23.1 Introduction
23.2 What Humans Do with Faces
23.2.1 Recognition and Identification
23.2.2 Visually Based Categorization
23.2.3 Expression Processing
23.3 Characteristics of Human Recognition
23.3.1 Norm-Based Coding
Typicality
Caricatures
Perceptual Adaptation
23.3.2 The "Other-Race Effect" for Faces
23.4 Human-Machine Comparisons
23.5 Identification Accuracy Across Illumination
23.6 Fusing Machine and Human Results
23.7 The "Other-Race Effect"
23.8 Conclusions
References
Part IV: Face Recognition Applications
Chapter 24: Face Recognition Applications
24.1 Introduction
24.2 Face Identification
24.3 Access Control
24.4 Security
24.5 Surveillance
24.6 Smart Cards
24.7 Law Enforcement
24.8 Face Databases
24.8.1 Using Faces to Assist Content-Based Image Retrieval
24.8.2 Using Content-Based Image Retrieval Techniques to Search Faces
24.8.3 Photo Tagging
24.9 Multimedia Management
24.10 Human Computer Interaction
24.10.1 Face Tracking
24.10.2 Emotion Recognition
24.10.3 Face Synthesis and Animation
24.11 Other Applications
24.12 Limitations of Current Face Recognition Systems
24.13 Conclusions
References
Chapter 25: Large Scale Database Search
25.1 Introduction
25.2 Design Objectives and Cost Criteria
25.3 Scalability
25.4 System Throughput and Biometric Accuracy
25.4.1 Multi-stage Comparison
25.4.2 Meaningful Scores
25.4.3 False Positive Identification in Large Scale Systems
25.4.4 Fusion
25.4.4.1 Uniform Distribution
25.4.5 Filtering and Demographic Information
25.4.6 Binning
25.5 Database Sanitization
25.6 Conclusions
References
Chapter 26: Face Recognition in Forensic Science
26.1 Introduction
26.2 Characteristics of Forensic Facial Recognition
26.3 Anthropometric Method
26.4 Use of Facial Recognition in Forensics
26.4.1 Department of Motor Vehicles
26.4.2 Police Agencies
26.5 Future Perspectives
26.6 Conclusions
References
Chapter 27: Privacy Protection and Face Recognition
27.1 Introduction
27.1.1 What Is Privacy?
27.1.2 Visual Privacy vs. General Data Privacy
27.2 Factors in Visual Privacy
27.2.1 Absolute and Relative Identification
27.3 Explosion of Digital Imagery
27.3.1 Video Surveillance
27.3.1.1 Camera-Based Sensors
27.3.1.2 Ambient Video Connections
27.3.2 Medical Images
27.3.3 Online Photograph Sharing
27.3.4 Street View
27.3.5 Institutional Databases
27.4 Technology for Enabling Privacy
27.4.1 Intervention
27.4.2 Visual Privacy by Redaction
27.4.3 Cryptographically Secure Processing
27.4.4 Privacy Policies and Tokens
27.5 Systems for Face Privacy Protection
27.5.1 Google Street View
27.5.2 De-identifying Face Images
27.5.3 Blind Face Recognition
27.6 Delivering Visual Privacy
27.6.1 Operating Point
27.6.2 Will Privacy Technology Be Used?
27.7 Conclusions
References
Index