ZTE Communications, 2017, Vol. 15, Issue (S2): 30-37. DOI: 10.3969/j.issn.1673-5188.2017.S2.005
• Special Topic •
REN Fuji, Kazuyuki Matsumoto
Received: 2017-08-11
Online: 2017-12-25
Published: 2020-04-16
About the authors:

REN Fuji (ren@is.tokushima-u.ac.jp) received his B.E. and M.E. degrees from Beijing University of Posts and Telecommunications, Beijing, China, in 1982 and 1985, respectively, and his Ph.D. degree from Hokkaido University, Japan, in 1991. From 1991 he worked at CSK, Japan, where he was a chief researcher in natural language processing. From 1994 to 2000, he was an associate professor in the Faculty of Information Sciences, Hiroshima City University, Japan. He became a professor in the Faculty of Engineering of the University of Tokushima, Japan, in 2001. His research interests include natural language processing, artificial intelligence, language understanding and communication, and affective computing. He is a member of IEICE, CAAI, IEEJ, IPSJ, JSAI, and AAMT, a senior member of IEEE, and a fellow of the Japan Federation of Engineering Societies. He is the president of the International Advanced Information Institute.

Kazuyuki Matsumoto (matumoto@is.tokushima-u.ac.jp) received his Ph.D. degree from Tokushima University, Japan, in 2008. He is currently an assistant professor at Tokushima University. His research interests include affective computing, emotion recognition, artificial intelligence, and natural language processing. He is a member of IPSJ, ANLP, JSAI, IEICE, and IEEJ.
REN Fuji, Kazuyuki Matsumoto. Emotion Analysis on Social Big Data[J]. ZTE Communications, 2017, 15(S2): 30-37.
Table 1 Examples of frequently appearing tags

| Favorite tag | Count | Unfavorite tag | Count | Personal tag | Count |
|---|---|---|---|---|---|
| Anime | 746 | Insect | 688 | Female | 1258 |
| Game | 575 | Horror | 220 | Male | 891 |
| Music | 575 | Cockroach | 194 | Blood type A | 584 |
| Cat | 425 | Male | 157 | Blood type O | 473 |
| VOCALOID | 340 | Female | 155 | Blood type B | 409 |
| Comic | 333 | Cat | 153 | High-school student | 381 |
| Voice Actor | 309 | Anime | 151 | Student | 296 |
Table 2 Evaluation test set and training data

| Test data (labeled), male | Test data (labeled), female | Training data (labeled), MAC | Training data (labeled), AAC | Pretraining data (unlabeled) |
|---|---|---|---|---|
| 4483 tweets (15 users) | 4479 tweets (15 users) | 46,979 sentences | 198,968 sentences | 1,100,000 tweets |
Table 3 Experimental results under Condition 2)

| Label | Precision (Male) | Recall (Male) | F1-score (Male) | Precision (Female) | Recall (Female) | F1-score (Female) |
|---|---|---|---|---|---|---|
| Anxiety | 0.292 | 0.480 | 0.363 | 0.261 | 0.657 | 0.373 |
| Love | 0.220 | 0.791 | 0.344 | 0.251 | 0.775 | 0.379 |
| Joy | 0.931 | 0.905 | 0.918 | 0.940 | 0.950 | 0.945 |
| Sorrow | 0.226 | 0.599 | 0.328 | 0.194 | 0.570 | 0.289 |
| Neutral | 0.397 | 0.386 | 0.391 | 0.372 | 0.297 | 0.331 |
| Anger | 0.141 | 0.755 | 0.238 | 0.168 | 0.663 | 0.268 |
| Surprise | 0.251 | 0.110 | 0.153 | 0.314 | 0.116 | 0.169 |
| Average | 0.351 | 0.575 | 0.391 | 0.357 | 0.575 | 0.394 |
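The F1-scores in these tables are the harmonic mean of the corresponding precision and recall values. As a quick sanity check, a minimal sketch (the `f1_score` helper is illustrative, not from the paper):

```python
def f1_score(precision: float, recall: float) -> float:
    """Harmonic mean of precision and recall."""
    if precision + recall == 0.0:
        return 0.0
    return 2.0 * precision * recall / (precision + recall)

# Male "Joy" row of Table 3: precision 0.931, recall 0.905
print(round(f1_score(0.931, 0.905), 3))  # → 0.918
```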
Table 4 Experimental results with Condition 3) (MSM)

| Label | Precision (Male) | Recall (Male) | F1-score (Male) | Precision (Female) | Recall (Female) | F1-score (Female) |
|---|---|---|---|---|---|---|
| Anxiety | 0.288 | 0.478 | 0.360 | 0.244 | 0.629 | 0.351 |
| Love | 0.219 | 0.797 | 0.343 | 0.251 | 0.777 | 0.380 |
| Joy | 0.931 | 0.905 | 0.918 | 0.940 | 0.950 | 0.945 |
| Sorrow | 0.228 | 0.598 | 0.331 | 0.196 | 0.576 | 0.292 |
| Neutral | 0.395 | 0.386 | 0.390 | 0.369 | 0.280 | 0.318 |
| Anger | 0.141 | 0.752 | 0.238 | 0.164 | 0.620 | 0.260 |
| Surprise | 0.249 | 0.103 | 0.146 | 0.247 | 0.120 | 0.161 |
| Average | 0.350 | 0.574 | 0.389 | 0.344 | 0.564 | 0.387 |
Table 5 Experimental results with Condition 4) (MMM)

| Label | Precision (Male) | Recall (Male) | F1-score (Male) | Precision (Female) | Recall (Female) | F1-score (Female) |
|---|---|---|---|---|---|---|
| Anxiety | 0.271 | 0.442 | 0.336 | 0.224 | 0.554 | 0.319 |
| Love | 0.213 | 0.769 | 0.333 | 0.247 | 0.774 | 0.375 |
| Joy | 0.898 | 0.968 | 0.932 | 0.918 | 0.958 | 0.938 |
| Sorrow | 0.262 | 0.584 | 0.362 | 0.226 | 0.692 | 0.341 |
| Neutral | 0.446 | 0.296 | 0.356 | 0.431 | 0.321 | 0.368 |
| Anger | 0.134 | 0.706 | 0.225 | 0.217 | 0.337 | 0.264 |
| Surprise | 0.178 | 0.251 | 0.208 | 0.158 | 0.225 | 0.186 |
| Average | 0.343 | 0.574 | 0.393 | 0.346 | 0.552 | 0.399 |
Table 6 Accuracy using the AAC model for each gender, with CR_unk, CR_emoji, CR_face, and CR_em

| | CR_unk (Male) | CR_emoji (Male) | CR_face (Male) | CR_em (Male) | CR_unk (Female) | CR_emoji (Female) | CR_face (Female) | CR_em (Female) |
|---|---|---|---|---|---|---|---|---|
| Accuracy | 0.573 (43/75) | | | | 0.688 (353/513) | | | |
| Failure | 0.616 | 0.038 | 0.001 | 0.005 | 0.676 | 0.065 | 0.005 | 0.007 |
| Success | 0.653 | 0.055 | 0.002 | 0.006 | 0.675 | 0.059 | 0.227 | 0.009 |
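The accuracy values in Table 6 are plain ratios of correctly classified users to total users, so the parenthesized counts can be checked directly (a trivial sketch, assuming only the counts shown in the table):

```python
# Table 6 reports accuracy as correct / total per gender
male_accuracy = 43 / 75      # correctly classified male users
female_accuracy = 353 / 513  # correctly classified female users
print(round(male_accuracy, 3), round(female_accuracy, 3))  # → 0.573 0.688
```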