A Facial Expression Recognition Algorithm Based on Weighted Face Sub-regions and LDA

Published: 2018-12-07 10:56
【Abstract】Facial expression is one of the most important features in daily human communication and is of great significance to research in affective computing. For static expression images, a single whole-face template-matching method has high feature dimensionality and includes many expression-irrelevant regions, so it is difficult to achieve good recognition performance. Starting from the notion of geometric confidence regions, this thesis adopts a geometric-prior-based weighting strategy for feature fusion, combined with linear discriminant analysis (LDA), and achieves good results. The main contributions are as follows:

(1) A weighted fusion feature extraction algorithm based on sub-regions (confidence regions) and multiple features. Because the detected region contains many areas unrelated to the face, a geometric-prior-based cropping strategy is applied to the detection region to obtain a more accurate face and its sub-regions. Since facial images contain expression-irrelevant areas and a single feature type describes them inadequately, Gabor wavelets extract features from the whole face region while HOG extracts features from the confidence regions. By studying the prior information (sensitivity) of each confidence region for facial expression, and verifying it experimentally, corresponding weights are assigned to the different confidence regions to obtain the weighted fused feature. Experiments on multiple datasets verify the effectiveness of the algorithm.

(2) An improved LDA algorithm based on correcting the within-class scatter matrix. To address the lack of discriminative power in the initially reduced features, cosine-similarity information is introduced into the within-class scatter matrix: the cosine of the angle between each sample vector and its class-mean vector is passed through a linear transform and applied as a multiplicative weight on the corresponding covariance terms, yielding better within-class compactness and between-class separation. Experiments on multiple datasets verify the effectiveness of the improved algorithm.

(3) A GRNN neural-network classifier applied, for the first time according to the thesis, to facial expression recognition. To overcome the limitations of traditional classifiers in fitting small-sample nonlinear data, a GRNN classifier is constructed based on an analysis of facial-expression data and embedded in the recognition algorithm; the fused feature serves as the network input, and training is completed after the pattern layer and summation layer. Experiments on multiple datasets verify the effectiveness of the algorithm.
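Contribution (1) fuses Gabor features from the whole face with sensitivity-weighted HOG features from the confidence regions. The sketch below is a minimal NumPy illustration of that fusion idea, not the thesis's implementation: the kernel parameters, the simplified one-histogram HOG, the region crops, and the weights are all assumptions made for the demo.

```python
import numpy as np

def gabor_kernel(ksize=9, sigma=2.0, theta=0.0, lam=4.0, gamma=0.5):
    """Real part of a Gabor kernel (parameters are illustrative)."""
    half = ksize // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    xr = x * np.cos(theta) + y * np.sin(theta)
    yr = -x * np.sin(theta) + y * np.cos(theta)
    return np.exp(-(xr**2 + (gamma * yr) ** 2) / (2 * sigma**2)) * np.cos(2 * np.pi * xr / lam)

def filter_response_stats(img, ker):
    """Pool a filter's responses over non-overlapping patches into (mean, std)."""
    kh, kw = ker.shape
    resp = np.asarray([np.sum(img[i:i + kh, j:j + kw] * ker)
                       for i in range(0, img.shape[0] - kh + 1, kh)
                       for j in range(0, img.shape[1] - kw + 1, kw)])
    return np.array([resp.mean(), resp.std()])

def hog_like(region, n_bins=9):
    """Simplified HOG stand-in: one orientation histogram over the sub-region."""
    gy, gx = np.gradient(region.astype(float))
    mag = np.hypot(gx, gy)
    ang = np.mod(np.arctan2(gy, gx), np.pi)          # unsigned orientation
    bins = np.minimum((ang / np.pi * n_bins).astype(int), n_bins - 1)
    hist = np.bincount(bins.ravel(), weights=mag.ravel(), minlength=n_bins)
    return hist / (np.linalg.norm(hist) + 1e-8)

def fuse_features(face, regions, weights, n_orient=4):
    """Gabor stats on the whole face + weighted HOG on each confidence region."""
    gab = np.concatenate([filter_response_stats(face, gabor_kernel(theta=k * np.pi / n_orient))
                          for k in range(n_orient)])
    sub = np.concatenate([w * hog_like(r) for r, w in zip(regions, weights)])
    return np.concatenate([gab, sub])

# Demo on synthetic data (crops and weights are stand-ins, not the thesis's values)
rng = np.random.default_rng(0)
face = rng.random((48, 48))
regions = [face[8:20, 6:22], face[8:20, 26:42], face[30:44, 12:36]]  # eyes, eyes, mouth
weights = [0.3, 0.3, 0.4]                                            # hypothetical sensitivities
fused = fuse_features(face, regions, weights)                        # length 4*2 + 3*9 = 35
```

In a full pipeline each sub-region would get a dense multi-cell HOG descriptor and a bank of Gabor orientations and scales; the weighting step, where sensitive regions such as eyes and mouth contribute more to the fused vector, is the part the thesis emphasizes.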
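Contribution (2) weights each sample's contribution to the within-class scatter by a linear transform of its cosine similarity to the class mean. The following NumPy sketch shows one plausible reading of that correction; the exact linear transform (here `alpha * cos + beta`) and the parameter values are assumptions, since the abstract does not specify them.

```python
import numpy as np

def cosine_weighted_lda(X, y, n_components=1, alpha=1.0, beta=1.0):
    """LDA whose within-class scatter weights each sample's outer product by
    alpha*cos + beta, where cos is the cosine between the sample and its
    class mean (assumed form of the thesis's linear transform)."""
    X = np.asarray(X, float)
    mu = X.mean(axis=0)
    d = X.shape[1]
    Sw = np.zeros((d, d))
    Sb = np.zeros((d, d))
    for c in np.unique(y):
        Xc = X[y == c]
        mc = Xc.mean(axis=0)
        cos = Xc @ mc / (np.linalg.norm(Xc, axis=1) * np.linalg.norm(mc) + 1e-8)
        w = alpha * cos + beta                 # cosine-based per-sample weight
        diff = Xc - mc
        Sw += (diff * w[:, None]).T @ diff     # weighted within-class scatter
        dm = (mc - mu)[:, None]
        Sb += len(Xc) * (dm @ dm.T)            # between-class scatter
    # top eigenvectors of Sw^+ Sb give the discriminant directions
    vals, vecs = np.linalg.eig(np.linalg.pinv(Sw) @ Sb)
    order = np.argsort(-vals.real)
    return vecs[:, order[:n_components]].real

# Demo: two well-separated 2-D Gaussian classes
rng = np.random.default_rng(1)
X = np.vstack([rng.normal([0, 0], 0.3, (20, 2)), rng.normal([3, 3], 0.3, (20, 2))])
y = np.array([0] * 20 + [1] * 20)
W = cosine_weighted_lda(X, y)
z = (X @ W).ravel()   # 1-D projection should separate the two classes
```

Samples well aligned with their class mean receive larger weight, so outliers pull the within-class scatter less, which is the mechanism behind the claimed improvement in within-class compactness.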
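Contribution (3) feeds the fused feature into a GRNN, whose pattern layer holds one Gaussian kernel per training sample and whose summation layer normalizes the kernel-weighted targets. A minimal classifier version can be sketched as follows; the class structure and the smoothing parameter value are illustrative, not the thesis's configuration.

```python
import numpy as np

class GRNN:
    """Generalized regression neural network used as a classifier:
    a pattern layer of Gaussian kernels centered on the training samples,
    followed by a summation layer over one-hot class targets."""

    def __init__(self, sigma=0.5):
        self.sigma = sigma  # kernel width (smoothing parameter)

    def fit(self, X, y):
        # GRNN training is just storing the samples and one-hot targets
        self.X = np.asarray(X, float)
        self.classes, idx = np.unique(y, return_inverse=True)
        self.Y = np.eye(len(self.classes))[idx]
        return self

    def predict(self, X):
        X = np.asarray(X, float)
        d2 = ((X[:, None, :] - self.X[None, :, :]) ** 2).sum(-1)
        K = np.exp(-d2 / (2 * self.sigma ** 2))  # pattern layer activations
        num = K @ self.Y                         # summation layer: numerator
        den = K.sum(axis=1, keepdims=True)       # summation layer: denominator
        return self.classes[np.argmax(num / den, axis=1)]

# Demo on toy 2-D data standing in for fused expression features
clf = GRNN(sigma=1.0).fit([[0, 0], [0, 1], [5, 5], [5, 6]], [0, 0, 1, 1])
pred = clf.predict([[0, 0.5], [5, 5.5]])  # -> [0, 1]
```

Because the network has no iterative weight training, it suits small-sample settings such as expression datasets; its behavior is governed almost entirely by the single smoothing parameter sigma.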
【Degree-granting institution】Dalian Maritime University
【Degree level】Master's
【Year conferred】2017
【CLC classification】TP391.41




Article ID: 2367089


Link: https://www.wllwen.com/shoufeilunwen/xixikjs/2367089.html


