ZTE Communications ›› 2019, Vol. 17 ›› Issue (2): 38-43.DOI: 10.12142/ZTECOM.201902006
• Special Topic •
Detecting Abnormal Start-Ups, Unusual Resource Consumptions of the Smart Phone: A Deep Learning Approach
ZHENG Xiaoqing1, LU Yaping2, PENG Haoyuan1, FENG Jiangtao1, ZHOU Yi1, JIANG Min2, MA Li2, ZHANG Ji2, JI Jie2
Received: 2018-01-17
Online: 2019-06-11
Published: 2019-11-14
Cite this article:
ZHENG Xiaoqing, LU Yaping, PENG Haoyuan, FENG Jiangtao, ZHOU Yi, JIANG Min, MA Li, ZHANG Ji, JI Jie. Detecting Abnormal Start-Ups, Unusual Resource Consumptions of the Smart Phone: A Deep Learning Approach[J]. ZTE Communications, 2019, 17(2): 38-43. DOI: 10.12142/ZTECOM.201902006
URL: https://zte.magtechjournal.com/EN/10.12142/ZTECOM.201902006
Table 1 Events and arguments

| Event | Argument list |
|---|---|
| am_proc_start | package; component; start type |
| am_proc_died | package |
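To make the schema in Table 1 concrete, here is a minimal Python sketch that parses such event records; the comma-separated raw line format is an assumption for illustration, not the actual Android event-log syntax.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class ProcEvent:
    """One am_proc_start / am_proc_died record, fields per Table 1."""
    event: str                        # "am_proc_start" or "am_proc_died"
    package: str                      # the package that started or died
    component: Optional[str] = None   # present only for am_proc_start
    start_type: Optional[str] = None  # startup mode, only for am_proc_start

def parse_line(line: str) -> ProcEvent:
    # Hypothetical format: "am_proc_start,com.example.app,MainActivity,activity"
    fields = line.strip().split(",")
    if fields[0] == "am_proc_start":
        return ProcEvent(fields[0], fields[1], fields[2], fields[3])
    return ProcEvent(fields[0], fields[1])  # am_proc_died carries only the package
```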
Table 2 Features extracted from the original logs as input to the deep learning model

| Feature | Comment |
|---|---|
| Event | am_proc_start / am_proc_died |
| Package | which package to start |
| Component | which component to start |
| Start type | the startup mode |
| R5 | whether rule R5 is satisfied |
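A hedged sketch of how the Table 2 features could be encoded as integer ids for the sequence model (reusing the hypothetical ProcEvent record above); the vocabularies and the <UNK>/<NONE> conventions are assumptions, not the paper's exact preprocessing.

```python
def build_vocab(values):
    """Assign an integer id to each categorical value; 0 is reserved for <UNK>."""
    vocab = {"<UNK>": 0}
    for v in values:
        vocab.setdefault(v, len(vocab))
    return vocab

def encode(rec, vocabs, r5_satisfied):
    """Feature vector per Table 2: event, package, component, start type, R5 flag."""
    return [
        vocabs["event"].get(rec.event, 0),
        vocabs["package"].get(rec.package, 0),
        vocabs["component"].get(rec.component or "<NONE>", 0),
        vocabs["start_type"].get(rec.start_type or "<NONE>", 0),
        int(bool(r5_satisfied)),  # whether rule R5 is satisfied
    ]
```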
Table 3 Model hyper-parameter configuration

| Hyper-parameter | Value |
|---|---|
| Size of the LSTM output | 2 × 50 |
| Size of the hidden layer in the classifier | 50 |
| Learning rate | 0.001 |
| Batch size | 32 |
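To show what Table 3 implies architecturally, a minimal PyTorch sketch: a bidirectional LSTM with hidden size 50 per direction (hence the 2 × 50 output), a 50-unit hidden layer in the classifier, Adam at learning rate 0.001, and mini-batches of 32. The embedding size, the reduction of each log event to a single token id, and the binary normal/abnormal output are simplifying assumptions.

```python
import torch
import torch.nn as nn

class LogSequenceClassifier(nn.Module):
    def __init__(self, vocab_size, embed_dim=32):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim)
        # Bidirectional LSTM, hidden size 50 per direction -> 2 x 50 output (Table 3)
        self.lstm = nn.LSTM(embed_dim, 50, batch_first=True, bidirectional=True)
        self.classifier = nn.Sequential(
            nn.Linear(2 * 50, 50),  # 50-unit hidden layer in the classifier (Table 3)
            nn.ReLU(),
            nn.Linear(50, 2),       # normal vs. abnormal (assumed binary label)
        )

    def forward(self, x):  # x: (batch, seq_len) of event token ids
        out, _ = self.lstm(self.embed(x))
        return self.classifier(out[:, -1, :])  # classify from the final time step

model = LogSequenceClassifier(vocab_size=10_000)
optimizer = torch.optim.Adam(model.parameters(), lr=0.001)  # learning rate per Table 3
# Training would then iterate over mini-batches of size 32 (Table 3).
```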
Table 4 Performance of the unusual CPU and memory consumption detection model

| Metric | CPU | Memory |
|---|---|---|
| Precision | 0.963 | 0.972 |
| Recall | 0.983 | 0.998 |
| F1-score | 0.973 | 0.985 |
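The metrics in Table 4 follow their standard definitions: precision = TP/(TP+FP), recall = TP/(TP+FN), and F1 is their harmonic mean. A small sketch with toy labels (scikit-learn assumed available):

```python
from sklearn.metrics import precision_score, recall_score, f1_score

y_true = [1, 1, 0, 1, 0, 1]  # 1 = unusual consumption (toy labels)
y_pred = [1, 1, 0, 1, 1, 1]

print(precision_score(y_true, y_pred))  # TP / (TP + FP)
print(recall_score(y_true, y_pred))     # TP / (TP + FN)
print(f1_score(y_true, y_pred))         # harmonic mean of precision and recall
```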
Table 5 Statistics of the labelled logs

| Rule(s) | Number of logs labelled by each rule |
|---|---|
| R1 | 95,138 |
| R2 | 269,348 |
| R3 | 390,203 |
| R4 | 19,557 |
| R5 | 82,912 |
| R6 | 207,881 |
| R4, R5, R6 | 310,350 |
| Total | 1,065,039 |
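For illustration, per-rule counts like those in Table 5 can be tallied with one pass over the labelled logs; the two rule predicates below are placeholders, since the actual definitions of R1-R6 are given in the paper's body, not here.

```python
from collections import Counter

# Placeholder predicates standing in for the paper's labelling rules R1-R6.
RULES = {
    "R1": lambda log: "am_proc_start" in log,
    "R2": lambda log: "am_proc_died" in log,
}

def tally(logs):
    """Count how many logs each rule labels (a log may match several rules)."""
    counts = Counter()
    for log in logs:
        for name, predicate in RULES.items():
            if predicate(log):
                counts[name] += 1
    return counts

print(tally(["am_proc_start,com.example.app", "am_proc_died,com.example.app"]))
# -> Counter({'R1': 1, 'R2': 1})
```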
Table 6 The performance of our models

| Model | Accuracy (%) |
|---|---|
| Deep learning model (for the part with future information involved) | 98.93 |
| Hybrid model | 99.69 |