ZTE Communications ›› 2021, Vol. 19 ›› Issue (3): 13-21.DOI: 10.12142/ZTECOM.202103003
• Special Topic •
WU Jiaying, WANG Chuyu(), XIE Lei
Received: 2021-06-10
Online: 2021-09-25
Published: 2021-10-11
About author:
WU Jiaying is a Ph.D. student in the Department of Computer Science and Technology, Nanjing University, China, supervised by Prof. XIE Lei and WANG Chuyu. Her research interests include smart sensing and RFID.
WU Jiaying, WANG Chuyu, XIE Lei. Device-Free In-Air Gesture Recognition Based on RFID Tag Array[J]. ZTE Communications, 2021, 19(3): 13-21.
URL: https://zte.magtechjournal.com/EN/10.12142/ZTECOM.202103003
| Part | Layer | Description |
|---|---|---|
| CNN part | Input layer | Input: feature image sequence; sequence length: 5; image size: 15×21 |
| CNN part | Convolution layer-1 | Extracts features; kernel size: 3×3×; step size: 1; activation function: ReLU |
| CNN part | Pooling layer-1 | Downsamples the extracted features; pooling type: max pooling; template size: 2×2; step size: 2 |
| CNN part | Convolution layer-2 | Extracts features; kernel size: 3×3×; step size: 1; activation function: ReLU |
| CNN part | Pooling layer-2 | Downsamples the extracted features; pooling type: max pooling; template size: 2×2; step size: 2 |
| CNN part | Fully connected layer | Fully connects the features to a summary vector |
| LSTM part | LSTM layer | Extracts features from the summary vector sequence; time steps: 5; number of hidden units: |
| LSTM part | Fully connected layer | Fully connects the features to a six-dimensional prediction vector |

Table 1 CNN-LSTM structure. The convolution kernel depth, the dimensionality of the fully connected layer, and the number of LSTM hidden units are hyperparameters selected by cross validation (Figure 7).
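The spatial dimensions implied by the CNN part of Table 1 can be sanity-checked with a short shape calculation. This is a sketch under one assumption the table does not state: the convolutions use no padding. Each 3×3 convolution with stride 1 and each 2×2 max pool with stride 2 then shrinks the 15×21 feature image as follows:

```python
def conv_out(size, kernel=3, stride=1):
    """Output size of a valid (unpadded) convolution along one axis."""
    return (size - kernel) // stride + 1

def pool_out(size, kernel=2, stride=2):
    """Output size of a max-pooling window along one axis."""
    return (size - kernel) // stride + 1

def cnn_feature_shape(h=15, w=21):
    """Trace the 15x21 input through the four CNN layers of Table 1."""
    h, w = conv_out(h), conv_out(w)   # Convolution layer-1: 13 x 19
    h, w = pool_out(h), pool_out(w)   # Pooling layer-1:      6 x 9
    h, w = conv_out(h), conv_out(w)   # Convolution layer-2:  4 x 7
    h, w = pool_out(h), pool_out(w)   # Pooling layer-2:      2 x 3
    return h, w

print(cnn_feature_shape())  # (2, 3)
```

So each of the five feature images in the sequence is reduced to a 2×3 spatial map (times the kernel depth) before the fully connected layer flattens it into the summary vector fed to the LSTM. With "same" padding instead, the sizes would differ, which is why the padding assumption is flagged.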
Figure 7 Cross validation of hyperparameters on (a) convolution kernel depth, (b) dimensionality of the fully connected layer, and (c) the number of hidden units
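The hyperparameter search in Figure 7 rests on k-fold cross validation: the training data are partitioned into folds, and each candidate value is scored on held-out folds. As a minimal illustration of the splitting step only (the paper does not specify the fold count, so k=5 below is a hypothetical choice), in plain Python:

```python
def k_fold_splits(n_samples, k=5):
    """Yield (train_indices, val_indices) for each of k cross-validation folds."""
    # Distribute samples as evenly as possible across the k folds.
    fold_sizes = [n_samples // k + (1 if i < n_samples % k else 0) for i in range(k)]
    indices = list(range(n_samples))
    start = 0
    for size in fold_sizes:
        val = indices[start:start + size]          # held-out fold
        train = indices[:start] + indices[start + size:]  # remaining samples
        yield train, val
        start += size

splits = list(k_fold_splits(10, k=5))
print(len(splits))       # 5
print(splits[0][1])      # [0, 1]
```

Each candidate hyperparameter value (kernel depth, fully connected dimensionality, hidden-unit count) would be trained on every `train` split and averaged over the corresponding `val` scores, and the best-scoring value kept, as reflected in the three panels of Figure 7.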