Facial expressions are a valuable source of information that accompanies facial biometrics. Early detection of physiological and psycho-emotional data from facial expressions is linked to the situational awareness module of any advanced biometric system for personal state re/identification. In this article, a new method that utilizes both texture and geometric information of facial fiducial points is presented. We investigate Gauss–Laguerre wavelets, which have rich frequency extraction capabilities, to extract texture information of various facial expressions. The rotation invariance and multiscale structure of these wavelets make the feature extraction robust. Moreover, the geometric positions of fiducial points provide valuable information about upper/lower face action units. The combination of these two types of features is used for facial expression classification. The performance of this system has been validated on three public databases: the JAFFE, the Cohn-Kanade, and the MMI image databases.
Keywords: Facial expression; Gauss–Laguerre wavelet; Feature fusion; Texture analysis
Automatic facial expression recognition (AFER) is of interest to researchers because of its importance for facial biometric-based intelligent support systems. It provides a behavioral measure for assessing emotions, cognitive processes, and social interaction. Applications of AFER include robotics, human–computer interfaces, behavioral science, animation and computer games, educational software, emotion processing, and fatigue detection. Because of limitations and difficulties such as occlusion, lighting conditions, and the variation of expressions across the population, or even for an individual, an automatic system helps in creating intelligent visual media for understanding different expressions. Moreover, this understanding helps in building meaningful and responsive HCI interfaces.
Each AFER system implements three main functions: face detection and tracking, feature extraction, and expression classification. The first attempt at AFER was made in 1978 by Suwa et al., who presented a system for facial expression analysis from video by tracking 20 points as features. Before that, only two approaches to FER existed: (a) human observer-based coding systems, which are subjective, time-consuming, and hard to standardize, and (b) electromyography-based systems, which are invasive (they need sensors on the face). Muscle actions result in various facial behaviors and motions, which can in turn be used to represent the corresponding facial expressions. These assumptions became the basis for developing the following systems for coding multiple facial expressions and emotions:
1. The Facial Action Coding System (FACS)—Ekman and Friesen.
2. The Facial Animation Parameters (FAPs)—MPEG-4 standard, SNHC.
In the study of Ekman and Friesen, it was shown that six emotions—anger, disgust, fear, happiness, sadness, and surprise—are “discriminable within any one literate culture”. Sometimes, a neutral expression is considered a seventh expression. The FACS describes facial expressions in terms of action units (AUs). It explains how to identify different facial expressions based on the activation of facial muscles individually or in groups. It contains 46 AUs, which are basic facial movements corresponding to different muscle activities. Recognizing AUs automatically from an image or a video is not an easy task. There are two main approaches to AU recognition:
1. Processing 2D static images.
2. Processing image sequences.
The first approach, which is more difficult than processing image sequences since less information is available, often uses feature-based methods. Using only one image for expression recognition requires robust and highly distinctive features to cope with variations in human subjects and imaging conditions. There are several methods for processing still images. One of them is PCA-based holistic representation with feed-forward neural networks (NNs) for classification, proposed by Cottrell and Metcalfe. Chen and Huang used clustering-based feature extraction to recognize only three facial expressions. Eigenface feature extraction accompanied by principal component analysis (PCA) was proposed by Turk and Pentland. Holistic representations and NNs were applied to pyramid-structured images by Rahardja et al. Feng et al. applied local binary patterns for feature extraction and used a linear programming technique as the classifier. Deformable models were utilized by Lanitis et al. to capture variations in shape and grey-level appearance. In the second approach, an image sequence displays one expression. The neutral face is used as a baseline, and FER is based on the difference between the baseline and the following input face images. Preliminary work on facial expressions, tracking the motion of 20 identified spots, was done by Suwa et al. Motion tracking of facial features in image sequences has been performed with optical flow, with expressions classified into six basic classes. The Fourier transform has been utilized for feature extraction, with fuzzy C-means clustering applied to build a spatiotemporal model for each expression.
Facial coding is normally performed in two different ways: holistic and analytic. In the holistic approach, the face is treated as a whole. Methods in this category include optical flow, Fisher linear discriminants, NNs, active appearance models (AAMs), and Gabor filters [16,17]. In the analytic approach, local features are used instead of the whole face: fiducial points describe the positions of important points on the face (e.g., eyes, eyebrows, mouth, and nose), together with the geometry or texture features around these points.
Gabor filters are widely used in texture analysis. These filters model simple cells in the primary visual cortex. Zafeiriou and Pitas showed the best performance of Gabor filters in both analytic and holistic approaches. Gabor filters have been used for expression classification in [4,20]. Although Gabor filters show high performance in FER, the main problem with them is selecting the optimal filter bank in terms of scale and orientation. For example, in , 40 filters (5 scales and 8 orientations) are used. Because of the large number of convolution operations, this requires large amounts of memory and computation. Moreover, with small training sets, the feature dimensionality is very high. Normally, two types of facial features are used: permanent and transient. Permanent features include the eyes, lips, brows, and cheeks; transient features include facial lines, wrinkles, and furrows. The eyebrows and mouth play the main role in facial expressions. Pardas and Bonafonte showed that expressions such as surprise, joy, and disgust have much higher recognition rates, since clear motions of the mouth and eyebrows are involved.
In this article, combined texture and geometric information of facial fiducial points is used to code different expressions. Gauss–Laguerre (GL) wavelets are used for texture analysis, and the positions of 18 fiducial points represent the deformation of the eyes, eyebrows, and mouth. The combination of these features is used for expression classification. The K-nearest neighbor (KNN) classifier assigns expressions based on the closest training examples in the feature space. The rest of the article is organized as follows: in “G–L wavelets” section, a mathematical description of GL circular harmonic wavelets (CHWs) is presented; the feature extraction approach and the classification method are described in “The proposed approach” section; experimental results using the JAFFE, the Cohn-Kanade, and the MMI face databases are reported in “Experiment results” section; finally, a conclusion is drawn in “Conclusion” section.
The CHWs are polar-separable wavelets with a harmonic angular shape. They are steerable in any desired direction by simple multiplication with a complex steering factor; thus, they are referred to as self-steerable wavelets. The CHWs were first introduced in  and utilize the concepts of circular harmonic functions (CHFs) employed in optical correlation for rotation-invariant pattern recognition. A CHF is represented in polar coordinates (r, θ) as

h_n(r, θ) = g_n(r) e^{jnθ},

where g_n(r) is a radial profile and n is the harmonic order.
The same functions also appear in harmonic tomographic decomposition and have been considered for the analysis of local image symmetry. CHFs have been employed for defining rotation-invariant pattern signatures. A family of orthogonal CHWs, forming a multi-resolution pyramid referred to as the circular harmonic pyramid (CHP), is utilized for coefficient generation and coding. Each CHW in the pyramid represents the image by translated, dilated, and rotated versions of a CHF. At the same time, for a fixed resolution, the CHP orthogonal system provides a local representation of the given image around a point in terms of CHFs. The self-steerability of each component of the CHP can be exploited for pattern analysis in the presence of rotation (in addition to translation and dilation), in particular, for pattern recognition irrespective of orientation.
CHFs are complex, polar-separable filters characterized by a harmonic angular shape, which allows building rotation-invariant descriptors. A scale parameter is also introduced to perform multi-resolution analysis. The GL filters, a family of orthogonal functions satisfying the wavelet admissibility condition required for multi-resolution wavelet pyramid analysis, are used. Similar to Gabor wavelets, any image may be represented by translated, dilated, and rotated replicas of the GL wavelet. For a fixed resolution, the GL CHFs provide a local representation of the image in the polar coordinate system centered at a given point, called the pivot point. This representation is called the GL transform.
For a given image I(x, y), let f(r, θ) denote its representation in the polar coordinate space centered at the pivot point. Since f(r, θ) is 2π-periodic in θ, it can be decomposed in terms of CHFs:

f(r, θ) = Σ_n f_n(r) e^{jnθ},   f_n(r) = (1/2π) ∫₀^{2π} f(r, θ) e^{−jnθ} dθ.
Each radial profile f_n(r) is then expanded as a series of weighted orthogonal functions, the GL CHFs:

f_n(r) = Σ_k c_{n,k} ψ_{n,k}(r),

where ψ_{n,k}(r) denotes the radial part of the GL function of order n and degree k, and c_{n,k} are the expansion coefficients.
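The angular decomposition above can be checked numerically: sampling the image on a ring around the pivot and taking a scaled DFT over θ approximates the coefficients f_n. The sketch below is illustrative only (the function name and sampling scheme are ours, not the paper's):

```python
import cmath

def chf_coefficients(samples, max_order):
    """Angular Fourier coefficients f_n of a 2*pi-periodic ring signal.

    samples: values of f(theta) at N equally spaced angles.
    Returns a dict n -> f_n = (1/2*pi) * integral of f(theta) e^{-j n theta},
    approximated by a Riemann sum (a plain DFT scaled by 1/N).
    """
    n_samples = len(samples)
    coeffs = {}
    for n in range(-max_order, max_order + 1):
        acc = 0j
        for m, value in enumerate(samples):
            theta = 2 * cmath.pi * m / n_samples
            acc += value * cmath.exp(-1j * n * theta)
        coeffs[n] = acc / n_samples
    return coeffs

# A pure second-harmonic ring f(theta) = e^{j 2 theta}: only n = 2 survives.
N = 64
ring = [cmath.exp(1j * 2 * (2 * cmath.pi * m / N)) for m in range(N)]
c = chf_coefficients(ring, 3)
```

As expected, the n = 2 coefficient is 1 and all other orders vanish, mirroring the orthogonality of the e^{jnθ} basis.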
As any CHF, GL functions are self-steering, i.e., they are rotated by an angle φ when multiplied by the factor e^{jnφ}. In particular, the real and imaginary parts of each GL function form a geometrical pair in phase quadrature. Moreover, GL functions are isomorphic to their Fourier transforms. It is shown in  that each GL function defines an admissible dyadic wavelet. Thus, the redundant set of wavelets, corresponding to different GL functions, represents a self-steering pyramid utilized for local and multiscale image analysis. The real part of a GL function is depicted in Figure 1a. An important feature, applicable to facial expression recognition, is that GL functions with various degrees of freedom can be tuned to significant visual features. For example, for n = 1, GLs are tuned to edges; for n = 2, to ridges; for n = 3, to equiangular forks; and for n = 4, to orthogonal crosses, irrespective of their actual orientation. Given an image I(x, y), for every site of the plane, it is possible to perform the GL analysis by convolving it with each properly scaled GL function as follows:
w_{n,k}(x, y; j) = I(x, y) ∗ [a^{−2j} GL_{n,k}(r/a^j, θ)],

where a^j (j = 0, 1, 2, …) are the dyadic scale factors and GL_{n,k} denotes the GL function of order n and degree k. Figure 1b shows a plot of GL CHFs for a fixed dyadic scale factor and varying n and k.
Figure 1. (a) Real part of a GL function; n = 4, k = 1, j = 2. (b) Real part of GL CHFs: variation of the filter in the spatial domain with fixed scale, k = 0, 1, …, 4, and n = 1, 2, …, 5.
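A GL CHF can be sampled on a pixel grid directly. The sketch below (Python rather than the authors' MATLAB) builds one filter up to a normalization constant, assuming the commonly used radial form r^n L_k^n(2πr²) e^{−πr²}; the grid size and pixel scale are illustrative choices, not the paper's values:

```python
import math, cmath

def laguerre(k, n, x):
    """Generalized Laguerre polynomial L_k^n(x) via the standard recurrence."""
    if k == 0:
        return 1.0
    prev, curr = 1.0, 1.0 + n - x
    for i in range(2, k + 1):
        prev, curr = curr, ((2 * i - 1 + n - x) * curr - (i - 1 + n) * prev) / i
    return curr

def gl_filter(size, n=2, k=1, scale=8.0):
    """Sample a Gauss-Laguerre CHF (up to normalization) on a size x size grid:
    GL_{n,k}(r, theta) ~ r^n * L_k^n(2*pi*r^2) * exp(-pi*r^2) * exp(j*n*theta),
    with r measured in units of `scale` pixels."""
    half = size // 2
    rows = []
    for y in range(-half, half + 1):
        row = []
        for x in range(-half, half + 1):
            r = math.hypot(x, y) / scale
            theta = math.atan2(y, x)
            radial = (r ** n) * laguerre(k, n, 2 * math.pi * r * r) * math.exp(-math.pi * r * r)
            row.append(radial * cmath.exp(1j * n * theta))
        rows.append(row)
    return rows

# The paper's tuned parameters: n = 2 (ridges), k = 1.
filt = gl_filter(33, n=2, k=1)
center = filt[16][16]      # r = 0, so the response vanishes for n > 0
v_east = filt[16][24]      # point at angle 0
v_north = filt[24][16]     # same radius, rotated by 90 degrees
```

The self-steering property is visible in the samples: rotating the evaluation point by 90° multiplies the value by e^{jnπ/2}, which for n = 2 is a sign flip.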
The proposed approach
In this section, the algorithmic steps of the proposed approach are explained. For each input image, the face area is localized first. Then, features are extracted based on GL filters, and, finally, KNN classification is used for expression recognition.
Preprocessing is normally performed before feature extraction for FER in order to increase system performance. The aim of this step, which includes scaling, intensity normalization, and size equalization, is to obtain images that contain only a face expressing a certain emotion. Sometimes, histogram equalization is also used to adjust image brightness and contrast. To normalize the face, the image with the neutral expression is scaled so that it has a fixed distance between the eyes. No intensity normalization has been applied, since the GL filters can extract an abundance of features without any preprocessing.
For face localization, we used the well-known Viola-Jones algorithm, which is based on Haar-like features and the AdaBoost learning algorithm. The localized face image is cropped automatically and resized to 128 × 96. The next step is to normalize the geometry, so that the recognition method is robust to individual differences. For the normalization, we need to extract the eye locations. Figure 2 shows an example of the normalization procedure. The output of this step is used directly for textural feature extraction. In this experiment, faces that were not cropped well automatically were cropped manually, since the purpose of this article is to propose a new feature extraction method for facial expression recognition rather than a detection algorithm.
Figure 2. Normalization procedure (left to right): (a) input image (from the JAFFE database), (b) the extracted AAM fiducial points, (c) normalized image to have fixed distance between eyes, (d) the localized and resized face.
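The eye-based geometric normalization amounts to a similarity transform that levels the line between the inner eye corners and fixes the inter-eye distance. A minimal sketch, with illustrative coordinates and target distance (not the paper's values):

```python
import math

def eye_normalization(left_eye, right_eye, target_distance=48.0):
    """Similarity transform (scale s, rotation) mapping the inner eye corners
    onto a horizontal line with a fixed inter-eye distance.
    Returns (s, angle, warp), where warp maps an arbitrary point."""
    dx = right_eye[0] - left_eye[0]
    dy = right_eye[1] - left_eye[1]
    dist = math.hypot(dx, dy)
    s = target_distance / dist
    angle = math.atan2(dy, dx)  # rotate by -angle to level the eye line

    def warp(p):
        # translate so the left eye is the origin, rotate, then scale
        px, py = p[0] - left_eye[0], p[1] - left_eye[1]
        rx = px * math.cos(-angle) - py * math.sin(-angle)
        ry = px * math.sin(-angle) + py * math.cos(-angle)
        return (s * rx, s * ry)

    return s, angle, warp

# Example: a slightly tilted face with eyes at (40, 60) and (88, 66).
s, angle, warp = eye_normalization((40.0, 60.0), (88.0, 66.0))
new_right = warp((88.0, 66.0))
```

After warping, the right eye lands at (48, 0): on the horizontal axis, exactly the target distance from the left eye.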
To extract facial features, the AAM is utilized. It is widely used in face recognition and expression classification due to its remarkable performance in extracting face shape and texture information. The AAM contains both a statistical shape model and texture information of the face, and performs matching by finding the model parameters that minimize the difference between the image and the synthesized model. We used 18 fiducial points to model the face and distinguish facial expressions; the corresponding features are explained in section “AAM”. In our experiment, the AAM has been created using different images from the three databases with different expressions. All images were roughly resized and cropped to 256 × 256. After creating the AAM, the eye positions in each image are automatically extracted, and the line that connects the inner corners of the eyes is used for normalization.
The AAM is an algorithm for matching a statistical shape model to an image with both shape and appearance variations. In facial expression recognition, for example, these deformations include facial expression changes and pose variations, along with texture variations caused by illumination. These variations are represented by a linear model such as PCA. So, the main purpose of the AAM is first to define a model and then to find, using a fitting algorithm, the parameters that best match the model to a given new image.
Normally, the fitting algorithm is iterated until the shape and appearance parameters satisfy particular values. The shape model is created by combining the vectors constructed from the points of the labeled images:
s = s₀ + Σ_{i=1}^{m} p_i s_i,

where p_i are the shape parameters. The mean shape s₀ and the m shape basis vectors s_i are obtained by applying PCA to the training data; they are the m eigenvectors corresponding to the m largest eigenvalues. Before applying PCA, the landmark points are normalized. The appearance variation is represented by a linear combination of a mean appearance A₀(x) and n appearance basis vectors A_i(x) as
A(x) = A₀(x) + Σ_{i=1}^{n} α_i A_i(x),

where α_i are the appearance parameters. After finding the shape and appearance parameters, a piecewise affine warp is used to construct the AAM by mapping each pixel of the appearance onto the inside of the current shape. The goal is to minimize the difference between the warped image and the appearance image.
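The linear shape model is simply a weighted sum of basis vectors added to the mean shape. The toy sketch below synthesizes a shape from hypothetical values (the 3-point shapes and the basis vector are illustrative, not trained AAM output):

```python
def synthesize_shape(mean_shape, basis, params):
    """Linear shape model s = s0 + sum_i p_i * s_i, as used by AAMs.
    mean_shape: flattened (x1, y1, ..., xN, yN) mean landmark vector.
    basis: list of basis vectors s_i (same length); params: coefficients p_i."""
    shape = list(mean_shape)
    for p, vec in zip(params, basis):
        for j, component in enumerate(vec):
            shape[j] += p * component
    return shape

# Toy example: 3 landmarks, one basis vector that moves the third point down.
s0 = [0.0, 0.0, 1.0, 0.0, 0.5, 0.5]
s1 = [0.0, 0.0, 0.0, 0.0, 0.0, 1.0]
shape = synthesize_shape(s0, [s1], [0.25])
```

Fitting inverts this synthesis: the optimizer searches for the p_i (and α_i) that make the synthesized model match the input image.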
The feature vector consists of two parts: textural features, extracted globally by applying the GL filter, and geometric information from local fiducial points.
Textural feature extraction
To extract facial features, we used GL functions, which provide a self-steering pyramidal analysis structure. By a proper choice of the GL function parameters (scale, order, and degree of the CHF, explained below), it is possible to generate a set of redundant wavelets and, thus, an accurate extraction of the complex texture features of facial expressions. The redundancy of the wavelet transform and the higher degree of freedom in the parameters of the GL function make it highly suitable for facial texture extraction compared to Gabor wavelets. To take advantage of the degrees of freedom provided by the GL function, the filter parameters have to be tuned to significant visual features and texture patterns, so that the desired frequency information of facial texture patterns can be extracted. In our experiment, the best results were obtained for n = 2, found by running several simulations in which the filters were convolved directly with the face image. The other parameters (scale and degree) were adjusted in the same manner; the best results were obtained for a = 2 and k = 1. The filtered output has the same size as the input image (128 × 96) and is complex-valued. Figure 3 shows examples of GL filtering with various parameters a and k. Unlike the Gabor filter bank, there is no need to construct multiple filters; a single tuned GL filter is sufficient for feature extraction. The size of the textural feature vector (128 × 96 = 12288) is quite large for fusion with the geometric information. When the dimension of the input vector is large and the data are highly correlated, methods such as PCA can remove the redundancy: the components of the input vectors are orthogonalized, so that they are no longer correlated with each other.
The components are ordered so that those with the largest variation come first, and those with low variation are eliminated. The data are usually normalized before performing PCA to have zero mean and unit variance. In our case, the size of the feature vector after down-sampling by a factor of 4 is 3072, which is further reduced to 384 samples per image using PCA.
Figure 3. The GL pyramid response for three different filters.
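The dimensionality chain (12288 filter responses → 3072 after down-sampling by 4 → 384 after PCA) can be sketched as below. For brevity the PCA basis is replaced by random stand-in vectors with only 8 components instead of the paper's 384; everything except the 12288 → 3072 step is illustrative:

```python
import random

def downsample(vec, factor):
    """Keep every `factor`-th sample of a flattened filter-response vector."""
    return vec[::factor]

def project(vec, components):
    """Project onto precomputed components (one dot product per component).
    Here the components are random stand-ins for a trained PCA basis."""
    return [sum(v * c for v, c in zip(vec, comp)) for comp in components]

random.seed(0)
full = [random.random() for _ in range(128 * 96)]    # |GL response| per pixel
reduced = downsample(full, 4)                         # 12288 -> 3072
basis = [[random.gauss(0, 1) for _ in range(3072)] for _ in range(8)]
features = project(reduced, basis)                    # 3072 -> 8 (384 in the paper)
```

In the actual pipeline the basis would come from PCA on the training set (after mean/variance normalization), so the projections are decorrelated and ordered by explained variance.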
Geometric feature extraction
As mentioned in section “Preprocessing”, 18 fiducial points are used to construct the model. These points are extracted automatically based on the AAM. The coordinates of these fiducial points are used to calculate 15 Euclidean distances. Different expressions result in different deformations of the corresponding facial components, especially near the eyes and mouth. The geometric feature extraction is performed as follows.
1. The AAM is applied to extract the 18 points. The distances are labeled by d’s as shown in Figure 4.
Figure 4. Geometric feature selection based on fiducial points.
2. For the upper portion of the face, ten distances are calculated, according to Table 1.
Table 1. Upper (distances 1–10) and lower (distances 11–15) face geometric distances
3. For the lower portion of the face, five distances are calculated, according to Table 1.
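The three steps above reduce to computing Euclidean distances between selected pairs of the 18 fiducial points. A minimal sketch follows; the index pairs are placeholders, not the actual pairings of Table 1:

```python
import math

def geometric_features(points, pairs):
    """Euclidean distances between selected fiducial-point pairs.
    points: 18 (x, y) tuples from the AAM fit; pairs: 15 index pairs."""
    return [math.dist(points[i], points[j]) for i, j in pairs]

# Placeholder pairing: 10 "upper face" + 5 "lower face" distances.
PAIRS = [(0, 1), (1, 2), (2, 3), (3, 4), (4, 5), (5, 6), (6, 7), (7, 8),
         (8, 9), (9, 10), (11, 12), (12, 13), (13, 14), (14, 15), (15, 16)]
pts = [(float(i), float(i % 3)) for i in range(18)]  # dummy landmark positions
d = geometric_features(pts, PAIRS)
```

Because the landmarks are first normalized to a fixed inter-eye distance, these 15 distances are comparable across subjects and image scales.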
The final feature vector is a combination of the two feature types. Since the dimension of the texture feature vector is 384 and the dimension of the geometric feature vector is 15, the total size is 399. During the simulations, it was observed that the geometric features are more important than the texture ones. To find appropriate weight coefficients for the two feature types, the average recognition rate was monitored for different geometric weight coefficients. Let the textural feature vector be F_T and the geometric feature vector be F_G. The final feature vector is F = [αF_T, βF_G]. The average recognition rate was obtained using the “Leave-One-Out” approach in three trials (see section “Experiment results”). In each trial, and for each database, one random image is used for testing and the rest for training, so the three trials generally have different train/test splits. The coefficient for the geometric features was varied starting from 0.5 in steps of 0.01. Figure 5 shows the average recognition rate versus the geometric weight coefficient. The best average recognition rate was obtained for β = 0.69 and α = 0.31.
Figure 5. Variation of coefficient for geometric features versus average recognition rate.
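The weighted fusion F = [αF_T, βF_G] is a scaled concatenation. A short sketch using the reported best weight β = 0.69 (α = 1 − β), with unit dummy features standing in for the real vectors:

```python
def fuse_features(texture, geometric, beta=0.69):
    """Weighted concatenation F = [alpha*F_T, beta*F_G], alpha = 1 - beta.
    beta = 0.69 is the best geometric weight reported in the text."""
    alpha = 1.0 - beta
    return [alpha * t for t in texture] + [beta * g for g in geometric]

f_t = [1.0] * 384   # textural features after PCA (dummy values)
f_g = [1.0] * 15    # geometric distances (dummy values)
fused = fuse_features(f_t, f_g)
```

The resulting 399-dimensional vector is what the KNN classifier operates on; sweeping β and re-measuring accuracy reproduces the search behind Figure 5.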
KNN is a well-known instance-based classification algorithm that does not make any assumptions about the underlying data distribution. The similarity between the test sample and the training samples is calculated, and the k most similar samples are determined. The class of the test sample is then decided based on the classes of its k nearest neighbors.
This classifier suits multi-class classification, in which the decision is based on a small neighborhood of similar objects. In the classification procedure, the training data are first represented in n-dimensional space, where n is the number of features. Each training sample is a vector labeled with its associated class (for an arbitrary number of classes). The number k defines how many neighbors influence the classification. Based on the suggestion made in , the best classification is obtained for k = 3. This suggestion was based on different experiments and on observing the classification rate on the JAFFE database. The same classifier is used for the Cohn-Kanade and the MMI databases as well.
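A minimal KNN vote with k = 3, as used here, can be sketched as follows (the toy 2-D training points and labels are illustrative, not the 399-dimensional features of the paper):

```python
import math
from collections import Counter

def knn_classify(train, query, k=3):
    """k-nearest-neighbor vote in feature space (k = 3 as suggested in the text).
    train: list of (feature_vector, label) pairs.
    Ties are broken in favor of the class of the nearest neighbor."""
    neighbors = sorted(train, key=lambda item: math.dist(item[0], query))[:k]
    votes = Counter(label for _, label in neighbors)
    return votes.most_common(1)[0][0]

train = [([0.0, 0.0], "neutral"), ([0.1, 0.1], "neutral"),
         ([1.0, 1.0], "happy"), ([1.1, 0.9], "happy"), ([0.9, 1.1], "happy")]
pred = knn_classify(train, [1.0, 0.95])
```

Since KNN stores the training set and defers all computation to query time, it fits the paper's per-database "train once, test on held-out samples" protocols without retraining.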
To evaluate the performance of the proposed method, the JAFFE, the Cohn-Kanade, and the MMI databases have been used. Eighteen fiducial points have been obtained via the AAM, and two types of information have been extracted: geometric and textural. MATLAB was used for the implementation.
The JAFFE database contains 213 images with a resolution of 256 × 256 pixels. Six basic expressions, in addition to the neutral face (seven in total), are considered in this database: happiness, sadness, surprise, anger, disgust, and fear. The images were taken from 10 Japanese female models, and each emotion was subjectively rated by 60 Japanese volunteers. Each subject posed three or four samples for each expression. Figure 6 shows examples of each expression.
Figure 6. Various expressions from the JAFFE database (left to right): anger, disgust, fear, happiness, sadness, surprise, and neutral.
The Cohn-Kanade database, which is widely used in the literature for facial expression analysis, consists of approximately 500 image sequences from posers. Each sequence goes from the neutral face to the target display, with the last frame being AU-coded. The subjects range in age from 18 to 30 years; 65% are female, 15% are African-American, and 3% are Asian or Latino. The images contain six different facial expressions: anger, disgust, fear, happiness, sadness, and surprise. Figure 7 shows examples of some expressions.
Figure 7. Examples of various expressions from the Cohn-Kanade database.
The MMI Facial Expression Database was created by the Man–Machine Interaction Group, Delft University of Technology, Netherlands. The database was initially established for research on machine analysis of facial expressions. It consists of over 2,900 videos and high-resolution still images of 75 subjects of both genders, who range in age from 19 to 62 years and have European, Asian, or South American ethnic backgrounds. The samples show both non-occluded and partially occluded faces, with or without facial hair and glasses. In our experiments, 96 image sequences were selected from the MMI database, the only selection criterion being that a sequence could be labeled as one of the six basic emotions. The sequences come from 20 subjects, with 1–6 emotions per subject. The neutral face and three peak frames of each sequence (hence, 384 images in total) were used for 6-class expression recognition. Some sample images from the MMI database are shown in Figure 8.
Figure 8. The sample face expression images from the MMI database.
Three different methods were selected to verify the accuracy of this system:
1. “Leave-One-Out” cross-validation: for each expression from each subject, one image is left out, and the rest are used for training.
2. Cross-validation: the database is randomly partitioned into ten distinct segments; nine partitions are used for training, and the remaining partition is used to test performance. The procedure is repeated so that every equal-sized set is used once as the test set. Finally, the average of the ten experiments is reported.
3. Expresser-based segmentation: the database is divided into several segments, each corresponding to a subject. For the JAFFE database, 213 expression images, posed by 10 subjects, are partitioned into 10 segments, each corresponding to one subject. For the Cohn-Kanade database, 375 video sequences were used, that is, over 4,000 images. Nine out of ten segments are used for training and the tenth for testing. This is repeated so that each of the ten segments is used for testing. The average results over the ten experiments are reported.
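The first two evaluation schemes can be sketched as split generators; using the 213 JAFFE images as sample indices is illustrative (real splits would group by subject and expression as described above):

```python
import random

def leave_one_out_splits(samples):
    """Yield (train, test) pairs, leaving out one sample at a time."""
    for i in range(len(samples)):
        yield samples[:i] + samples[i + 1:], [samples[i]]

def ten_fold_splits(samples, seed=0):
    """Random 10-fold partition: each fold serves once as the test set."""
    shuffled = samples[:]
    random.Random(seed).shuffle(shuffled)
    folds = [shuffled[i::10] for i in range(10)]
    for i in range(10):
        train = [s for j, fold in enumerate(folds) if j != i for s in fold]
        yield train, folds[i]

data = list(range(213))  # e.g., indices of the 213 JAFFE images
loo = list(leave_one_out_splits(data))
tf = list(ten_fold_splits(data))
```

Each protocol guarantees that every sample is tested exactly once and never appears in its own training set, which is what makes the reported averages comparable.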
Table 2 shows the average success rate for the different approaches. The confusion matrix for the “Leave-One-Out” method is presented in Table 3. For the average recognition rate, nine out of ten expression image classes are used for training, with the remaining one used as the test set each time. This procedure is repeated for each subject.
Table 2. Recognition accuracy (%) on the JAFFE database for different approaches
Table 3. Confusion matrix for the Leave-One-Out method (the JAFFE database)
The confusion matrix is a 7 × 7 matrix whose rows correspond to the actual class labels and whose columns correspond to the predicted ones. The diagonal entries are the rounded average successful recognition rates over ten trials, while the off-diagonal entries correspond to misclassifications. The total recognition rate is 96.71%; the best rates are for the surprise and happiness expressions, and the lowest is for anger. The performance of the proposed method is compared against published methods in Table 4.
Table 4. Comparison with other methods on the JAFFE database
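Building such a matrix from per-image predictions is straightforward; the sketch below uses four made-up predictions purely to show the row/column convention (rows actual, columns predicted):

```python
def confusion_matrix(true_labels, predicted_labels, classes):
    """Row = actual class, column = predicted class, entries = counts."""
    index = {c: i for i, c in enumerate(classes)}
    matrix = [[0] * len(classes) for _ in classes]
    for t, p in zip(true_labels, predicted_labels):
        matrix[index[t]][index[p]] += 1
    return matrix

CLASSES = ["anger", "disgust", "fear", "happiness", "sadness", "surprise", "neutral"]
truth = ["happiness", "happiness", "anger", "surprise"]
preds = ["happiness", "happiness", "fear", "surprise"]
cm = confusion_matrix(truth, preds, CLASSES)
total_rate = sum(cm[i][i] for i in range(7)) / len(truth)
```

The overall recognition rate is the trace of the count matrix divided by the number of test samples; per-class rates come from normalizing each row by its sum.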
Table 5 shows the average success rate for the different approaches. The confusion matrix for the “Leave-One-Out” method is presented in Table 6. Different numbers of images have been selected for experiments in the literature, based on different criteria (see Table 7). In this experiment, 375 image sequences were selected from 97 subjects, the criterion being that a sequence is labeled as one of the six basic emotions and the video clip is longer than ten frames. The total recognition rate is 92.2%; the best rate is for the happiness expression, and the lowest is for sadness. The performance of the proposed method is compared against published methods in Table 7. Although several frames from each video sequence are used, we consider them as “static” images, without using any temporal information.
Table 5. Recognition accuracy (%) on the Cohn-Kanade database for different approaches
Table 6. Confusion matrix for the Leave-One-Out method (the Cohn-Kanade database)
Table 7. Comparison of facial expression recognition for the Cohn-Kanade database
Table 8 shows the average success rate for the different approaches. The total recognition rate is 87.6%; the best rate is for the happiness expression, and the lowest is for sadness.
Table 8. Recognition accuracy (%) on the MMI database for different approaches
The experimental results show that the proposed method meets the criteria of accuracy and efficiency for facial expression classification. In terms of accuracy, it outperforms some existing approaches that used the same databases. The average recognition rate of the proposed approach on the JAFFE database is 96.71% with the “Leave-One-Out” method and 95.04% with cross-validation. For the Cohn-Kanade database, the rates are 92.20% (“Leave-One-Out”) and 90.37% (cross-validation); for the MMI database, 87.66% (“Leave-One-Out”) and 85.97% (cross-validation). Few articles report the accuracy of emotion recognition on the MMI database; most report recognition rates for AUs. Sánchez et al. achieved 92.42%, but it is not clear how many video sequences were used. Cerezo et al. reported a 92.9% average recognition rate on 1,500 still images from mixed MMI and Cohn-Kanade databases. Shan et al. used 384 images from the MMI, reporting an average recognition rate of 86.9%.
For the “Leave-One-Out” procedure in Table 5, all image sequences are divided into six classes, each corresponding to one of the six expressions. Four sets, each containing 20% of the data for each class, chosen randomly, were used for training, while the remaining 20% were used as the test set.
The classification procedure is repeated five times. In each cycle, the samples from the previous test set are returned to the training set, a new set of samples (20% of the samples for each class) is formed as the new test set, and the remaining samples form the new training set. Finally, the average classification rate is the mean of the success rates over the five runs.
This article proposes combined texture/geometric feature selection for facial expression recognition. The GL circular harmonic filter is applied, for the first time, to facial expression identification. The advantages of this filter are its rich frequency extraction capability for texture analysis, its rotation invariance, and its multiscale structure. The geometric information of the fiducial points is added to the texture information to construct the feature vector. Given a still expression image, normalization is performed first, and the extracted features are then passed through a KNN classifier. Experiments showed that the selected features represent facial expressions effectively, demonstrating average success rates of 96.71, 92.2, and 87.66% under the “Leave-One-Out” strategy, and 95.04, 90.37, and 85.97% under cross-validation, on the JAFFE, Cohn-Kanade, and MMI databases, respectively. These results are comparable with those reported for other approaches: the presented results demonstrate a better success rate on the JAFFE database and fall in the same range as other approaches on the Cohn-Kanade database. Further development of the proposed approach includes perfecting the local and global feature selections, as well as testing other classification techniques.
M Yuki, WW Maddux, T Masuda, Are the windows to the soul the same in the East and West? Cultural differences in using the eyes and mouth as cues to recognize emotions in Japan and the United States. J. Exp. Soc. Psychol. 43(2), 303–311 (2007). Publisher Full Text
M Suwa, N Sugie, K Fujimora, A preliminary note on pattern recognition of human emotional expression, in Proceedings of the Fourth International Joint Conference on Pattern Recognition (Kyoto, Japan, 1978), pp. 408–410
S Bashyal, GK Venayagamoorthy, Recognition of facial expressions using Gabor wavelets and learning vector quantization. Eng. Appl. Artif. Intell. 21, 1056–1064 (2008). Publisher Full Text
G Cottrell, J Metcalfe, Face, gender and emotion recognition using holons, in Advances in Neural Information Processing Systems 3 (Morgan Kaufmann, San Mateo, 1991), pp. 564–571
X Chen, T Huang, Facial expression recognition: a clustering based approach. Pattern Recognit. Lett. 24, 1295–1302 (2003). Publisher Full Text
M Turk, A Pentland, Eigenfaces for recognition. J. Cogn. Neurosci. 3, 71–86 (1991). Publisher Full Text
A Rahardja, A Sowmya, W Wilson, A neural network approach to component versus holistic recognition of facial expressions in images. Intell. Robots Comput. Vis. X: Algorithms and Techniques 1607, 62–70 (1991)
X Feng, M Pietikäinen, A Hadid, Facial expression recognition based on local binary patterns. Pattern Recognit. Image Anal. 17(4), 592–598 (2007)
A Lanitis, C Taylor, T Cootes, Automatic interpretation and coding of face images using flexible models. IEEE Trans. Pattern Anal. Mach. Intell. 19(7), 743–756 (1997)
T Xiang, MKH Leung, SY Cho, Expression recognition using fuzzy spatio-temporal modeling. Pattern Recognit. 41(1), 204–216 (2008)
I Essa, A Pentland, Coding, analysis, interpretation and recognition of facial expressions. IEEE Trans. Pattern Anal. Mach. Intell. 19(7), 757–763 (1997)
IR Fasel, MS Bartlett, JRA Movellan, A comparison of Gabor filter methods for automatic detection of facial landmarks (IEEE 5th International Conference on Automatic Face and Gesture Recognition, Washington, DC, 2002), pp. 242–248
B Fasel, J Luettin, Automatic facial expression analysis: a survey. Pattern Recognit. 36(1), 259–275 (2003)
M Pardas, A Bonafonte, Facial animation parameters extraction and expression recognition using Hidden Markov Models. Signal Process.: Image Commun. 17, 675–688 (2002)
H Ahmadi, A Pousaberi, A Azizzadeh, M Kamarei, An efficient iris coding based on Gauss-Laguerre wavelets, in 2nd IAPR/IEEE International Conference on Biometrics (Seoul, South Korea, 2007), vol. 4642, pp. 917–926
T Kanade, JF Cohn, Y Tian, Comprehensive database for facial expression analysis (Proceedings of the 4th IEEE International Conference on Automatic Face and Gesture Recognition, Grenoble, France, 2000), pp. 46–53
M Pantic, MF Valstar, R Rademaker, L Maat, Web-based database for facial expression analysis (Proceedings of the IEEE International Conference on Multimedia and Expo (ICME’05), Amsterdam, Netherlands, 2005), pp. 317–321
C Shan, G Shaogang, PW McOwan, Facial expression recognition based on local binary patterns: a comprehensive study. Image Vis. Comput. 27, 803–816 (2009)
M Lyons, J Budynek, S Akamatsu, Automatic classification of single facial images. IEEE Trans. Pattern Anal. Mach. Intell. 21, 1357–1362 (1999)
Z Zhang, M Lyons, M Schuster, S Akamatsu, Comparison between geometry-based and Gabor-wavelet-based facial expression recognition using multi-layer perceptron (Proceedings of the 3rd International Conference on Automatic Face and Gesture Recognition, Nara, Japan, 1998), pp. 454–459
W Liejun, Q Xizhong, Z Taiyi, Facial expression recognition using improved support vector machine by modifying kernels. Inf. Technol. J. 8(4), 595–599 (2009)
G Guo, CR Dyer, Learning from examples in the small sample case: face expression recognition. IEEE Trans. Syst. Man Cybern. B 35(3), 477–488 (2005)
Y Zhan, J Ye, D Niu, P Cao, Facial expression recognition based on Gabor wavelet transformation and elastic templates matching. Int. J. Image Graph. 6(1), 125–138 (2006)
MS Bartlett, G Littlewort, I Fasel, JR Movellan, Real time face detection and facial expression recognition: development and applications to human computer interaction, in IEEE Conference on Computer Vision and Pattern Recognition Workshop (Madison, Wisconsin, 2003), vol. 5, p. 53
G Littlewort, M Bartlett, I Fasel, J Susskind, J Movellan, Dynamics of facial expression extracted automatically from video (Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Workshop on Face Processing in Video, New York, USA, 2004), pp. 80–88
SP Aleksic, KA Katsaggelos, Automatic facial expression recognition using facial animation parameters and multi-stream HMMs. IEEE Trans. Inf. Forensics Secur. 1(1), 3–11 (2006)
I Kotsia, I Pitas, Facial expression recognition in image sequences using geometric deformation features and support vector machines. IEEE Trans. Image Process. 16(1), 172 (2007)
E Cerezo, I Hupont, S Baldassarri, S Ballano, Emotional facial sensing and multimodal fusion in a continuous 2D affective space. Ambient Intell. Hum. Comput. 3, 31–46 (2012)