Person identification is key to enabling personalized services in smart homes, including smart voice assistants, augmented reality, and targeted advertising. Although research over the past decades has produced person-identification technologies with high accuracy, existing solutions either require explicit user interaction or rely on image and video processing, and thus suffer from cost and privacy limitations. In this paper, we introduce a device-free person identification system, HiddenTag, which uses smartphones to identify different users by profiling indoor activities with inaudible sound and channel state information (CSI). HiddenTag emits inaudible sound and senses its diffraction and multi-path reflection using smartphones. Based on the multi-path effects and human body absorption, we design suitable sound signals and acoustic features for constructing human body signatures. In addition, we use CSI to trigger acoustic sensing. Extensive experiments indicate that HiddenTag can distinguish multiple persons within 10–15 s with 95.1% accuracy. We implement a prototype of HiddenTag as an online system on Android smartphones, maintaining 84%–90% online accuracy.
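The abstract above does not give HiddenTag's actual signal design; as a hedged illustration of the general idea of smartphone-based inaudible sensing, the sketch below generates a windowed linear chirp in the 18–20 kHz band (a common choice for smartphone speakers, assumed here, not taken from the paper).

```python
import numpy as np

def inaudible_chirp(f0=18000.0, f1=20000.0, duration=0.04, fs=48000):
    """Generate a linear chirp sweeping f0 -> f1 Hz, above typical hearing range."""
    t = np.arange(int(duration * fs)) / fs
    # Instantaneous phase of a linear chirp: 2*pi*(f0*t + (f1-f0)/(2*duration)*t^2)
    phase = 2 * np.pi * (f0 * t + (f1 - f0) / (2 * duration) * t ** 2)
    # Hann window suppresses spectral leakage into the audible band at onset/offset
    return np.hanning(t.size) * np.sin(phase)

sig = inaudible_chirp()
```

The windowing matters in practice: abrupt onsets of a high-frequency tone produce audible clicks, defeating the "inaudible" property.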
Because gestures convey information, gesture recognition plays an increasingly important part in human-computer interaction. Traditional methods for recognizing gestures are mostly device-based, meaning users must be in contact with the devices. To overcome the inconvenience of device-based methods, studies on device-free gesture recognition have been conducted. However, computer vision methods bring privacy issues and light-interference problems. Therefore, we turn to wireless technology. In this paper, we propose a device-free in-air gesture recognition method based on a radio frequency identification (RFID) tag array. By capturing the signals reflected by gestures, we can extract gesture features. For dynamic gestures, both temporal and spatial features need to be considered. For static gestures, spatial features are key, and a neural network is adopted to recognize the gestures. Experiments show that the accuracy of dynamic gesture recognition on the test set is 92.17%, while that of static gesture recognition is 91.67%.
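The paper uses a neural network; as a much simpler stand-in that still illustrates classifying static gestures from a spatial feature map over a tag array, here is a nearest-centroid classifier on flattened per-tag feature matrices (the gesture names and 3x3 array size are hypothetical).

```python
import numpy as np

def train_centroids(features, labels):
    """Average the flattened tag-array feature maps per gesture class."""
    return {g: np.mean([f.ravel() for f, l in zip(features, labels) if l == g],
                       axis=0)
            for g in set(labels)}

def classify(centroids, feature):
    """Assign the gesture whose class centroid is nearest in Euclidean distance."""
    v = feature.ravel()
    return min(centroids, key=lambda g: float(np.linalg.norm(centroids[g] - v)))
```

A neural network replaces the centroid distance with a learned decision boundary, which is what buys the reported accuracy on harder-to-separate gestures.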
With the rapid development of 5G technology, mmWave sensing has attracted increasing attention. As an emerging sensing medium, mmWave offers both high sensitivity and high precision. Unlike its networking applications, the core method of mmWave sensing is to analyze changes in reflected signals that carry information about the surrounding environment. In this paper, we conduct a systematic review of mmWave sensing. We first summarize prior work on environmental sensing with different signal analysis methods. Then, we classify and discuss work on sensing humans, including their behavior and gestures. Finally, we discuss further possibilities for mmWave-based human perception.
Mobile edge users (MEUs) collect data from sensor devices and report to cloud systems, which can facilitate numerous applications in sensor-cloud systems (SCS). However, because there is no effective way to access the ground truth to verify the quality of sensing devices' data or MEUs' reports, malicious sensing devices or MEUs may report false data and damage the platform. It is therefore critical to select sensing devices and MEUs that report truthful data. To tackle this challenge, a novel scheme that uses an unmanned aerial vehicle (UAV) to detect the truthfulness of sensing devices and MEUs (UAV-DT) is proposed to construct a clean data collection platform for SCS. In the UAV-DT scheme, the UAV delivers check codes to sensor devices and requires them to provide routes to a specified destination node. Then, the UAV flies along the path that enables maximal truth detection and collects information on the sensing devices forwarding data packets to the cloud during this period. The information collected by the UAV is checked in two respects to verify the credibility of the sensor devices. The first is to check whether there is an abnormality between the received and sent data packets of the sensing devices, from which a degree-of-trust evaluation is given; the second is to compare the data packets submitted by the sensing devices to MEUs with the data packets submitted by the MEUs to the platform, verifying the credibility of the MEUs. Then, based on the verified trust values, an incentive mechanism is proposed to select credible MEUs for data collection, so as to create a clean data collection sensor-cloud network. Simulation results show that the proposed UAV-DT scheme can identify the trustworthiness of sensing devices and MEUs well. As a result, the proportion of clean data collected is greatly improved.
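The two credibility checks described above can be sketched in simplified form; the scoring rule below is an illustrative assumption (the paper does not specify its trust formula), but it captures the structure: a forwarding-consistency score for devices, and a cross-comparison of device-side and MEU-side packet reports.

```python
def packet_trust(received, forwarded, tolerance=0):
    """Check 1: a credible device forwards what it receives; the trust score
    decays with the mismatch between received and sent packet counts."""
    diff = abs(received - forwarded)
    if diff <= tolerance:
        return 1.0
    return max(0.0, 1.0 - diff / max(received, forwarded, 1))

def meu_consistent(device_reports, meu_report):
    """Check 2: the packets devices handed to the MEU must match (as a multiset)
    what the MEU actually uploaded to the platform."""
    return sorted(device_reports) == sorted(meu_report)
```

An incentive mechanism would then weight MEU selection by these scores, preferring MEUs whose uploads pass the consistency check.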
Degenerative joint disease is one of the problems threatening global public health. Current therapies for the disease are mainly conservative and not very effective, so effective, convenient, and inexpensive therapies are needed. With the rapid development of artificial intelligence, we innovatively propose to combine Traditional Chinese Medicine (TCM) with artificial intelligence to design a rehabilitation assessment system based on TCM Daoyin. Our system consists of four subsystems: a spine movement assessment system, a posture recognition and correction system, a background music recommendation system, and a physiological signal monitoring system. We incorporate several technologies, such as keypoint detection, posture estimation, heart rate detection, and deriving respiration from electrocardiogram (ECG) signals. Finally, we integrate the four subsystems into a portable wireless device, making the rehabilitation equipment well suited for home and community environments. The system can effectively alleviate the shortage of physicians and nurses, while also promoting TCM culture.
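One commonly used way to derive respiration from ECG (not necessarily the method of this paper) exploits the fact that breathing modulates R-peak amplitudes; a minimal sketch with a deliberately crude peak detector:

```python
import numpy as np

def r_peak_amplitudes(ecg, fs, min_gap=0.4):
    """Crude R-peak detector: local maxima above half the signal maximum,
    at least min_gap seconds apart. The returned amplitude series is a
    respiration proxy (breathing modulates R-peak height)."""
    thresh = 0.5 * ecg.max()
    gap = int(min_gap * fs)
    peaks, last = [], -gap
    for i in range(1, len(ecg) - 1):
        if (ecg[i] > thresh and ecg[i] >= ecg[i - 1]
                and ecg[i] > ecg[i + 1] and i - last >= gap):
            peaks.append(i)
            last = i
    idx = np.array(peaks, dtype=int)
    return idx, ecg[idx]
```

Interpolating the amplitude series back onto a uniform time grid then yields a low-frequency waveform whose dominant frequency estimates the breathing rate.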
Intelligent perception technology of sensors in autonomous vehicles has been deeply integrated with autonomous driving algorithms. This paper surveys the impact of sensing technologies on autonomous driving, including how intelligent perception is reshaping the car architecture from distributed to centralized processing, and the common perception algorithms being explored in autonomous vehicles, such as visual perception, 3D perception, and sensor fusion. Pure visual sensing solutions have shown powerful capabilities in 3D perception by leveraging recent self-supervised learning progress, compared with light detection and ranging (LiDAR)-based solutions. Moreover, we discuss trends in end-to-end policy decision models for high-level autonomous driving technologies.
Quality of Experience (QoE), which has been studied for a long time, is used to monitor the user experience of telecommunication services. In the universal terrestrial radio access network (UTRAN), evolved UTRAN (E-UTRAN), and Long Term Evolution (LTE), QoE has also been specified to improve user experience. The 5G New Radio (NR) technology is designed to provide various new types of services, so operators have a strong demand to continuously upgrade the 5G network to provide sufficient and good QoE for the corresponding services. With new emerging 5G services, 5G QoE management collection aims to specify the mechanism for collecting experience parameters for the multimedia telephony service for the IP multimedia subsystem (IMS), multimedia broadcast and multicast service (MBMS), virtual reality (VR), etc. Taking LTE QoE as a baseline, this paper introduces generic NR QoE management mechanisms for the activation, deactivation, configuration, and reporting of QoE measurements. Additionally, some enhanced QoE features in NR are discussed, such as radio access network (RAN) overload handling, RAN-visible QoE, per-slice QoE measurement, radio-related measurement, and QoE continuity for mobility. This paper also introduces solutions for NR QoE, summarizing the progress of NR QoE in the 3rd Generation Partnership Project (3GPP).
With the vigorous development of mobile networks, the number of devices at the network edge is growing rapidly, and the massive amount of data generated by these devices poses serious challenges in response latency and communication burden. Existing resource monitoring systems are widely deployed in cloud data centers, but traditional resource monitoring solutions struggle to handle the massive data generated by thousands of edge devices. To address these challenges, we propose a super resolution sensing (SRS) method for distributed resource monitoring, which recovers reliable and accurate high-frequency data from low-frequency sampled resource monitoring data. Experiments based on the proposed SRS model show that it effectively reduces the errors incurred when recovering high-frequency data from low-frequency monitoring data, verifying the effectiveness and practical value of applying the SRS method to resource monitoring on edge clouds.
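The abstract does not describe the SRS model itself; the simplest baseline it would be compared against is linear interpolation of the low-frequency samples onto the high-frequency grid, sketched below together with the recovery-error metric such a comparison needs (the mean-absolute-error choice is an assumption).

```python
import numpy as np

def upsample_linear(samples, factor):
    """Baseline recovery: linearly interpolate low-frequency samples
    onto a grid that is `factor` times denser."""
    n = len(samples)
    low_t = np.arange(n) * factor              # timestamps of low-frequency samples
    high_t = np.arange((n - 1) * factor + 1)   # high-frequency timestamps
    return np.interp(high_t, low_t, samples)

def recovery_error(truth, recovered):
    """Mean absolute error between the true high-frequency trace and the recovery."""
    return float(np.mean(np.abs(truth - recovered)))
```

A learned SRS model earns its keep exactly where this baseline fails: bursty resource metrics whose high-frequency structure is not linear between samples.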
The design concepts of the semiconductor optical amplifier (SOA) and the gain chip used in wavelength-tunable lasers (TLs) are discussed in this paper. The design concepts are similar to those of a conventional SOA or laser, with a few differences. An SOA in front of a tunable laser should be polarization dependent and have a low optical confinement factor. To obtain a wide gain bandwidth at the threshold current, the gain chip used in the tunable laser cavity should lie between the SOA and fixed-wavelength laser designs, the latter having a high optical confinement factor. A detailed discussion is given with basic equations, and some simulation results on the saturation power of the SOA and the gain bandwidth of the gain chip are shown.
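The guideline that a front-end SOA should have a low optical confinement factor follows from the textbook rate-equation result (a standard relation, not quoted from this paper) that the saturation power of an SOA scales inversely with the confinement factor:

```latex
P_{\mathrm{sat}} = \frac{h\nu \, A}{\Gamma \, a \, \tau_c}
```

where $h\nu$ is the photon energy, $A$ the active-region cross-sectional area, $\Gamma$ the optical confinement factor, $a$ the differential gain, and $\tau_c$ the carrier lifetime. Reducing $\Gamma$ thus directly raises the saturation power, which is why the booster SOA and the high-confinement fixed-wavelength laser sit at opposite ends of the design space.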
One particular challenge for large-scale software systems is anomaly detection. System logs are a straightforward and common source of information for anomaly detection. Existing log-based anomaly detectors are unusable in real-world industrial systems due to high false-positive rates. In this paper, we incorporate human feedback to adjust the detection model structure and reduce false positives. We apply our approach to two industrial large-scale systems. Results show that our approach performs much better than state-of-the-art works, with 50% higher accuracy. Moreover, human feedback can eliminate more than 70% of false positives and greatly improve detection precision.
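The paper adjusts the detection model's structure from feedback; as a deliberately simpler stand-in for the same feedback loop, the sketch below only moves an anomaly-score threshold when an analyst marks flagged logs as false or true positives (the step sizes and label names are hypothetical).

```python
def adjust_threshold(threshold, flagged, feedback, step=0.05):
    """Raise the anomaly-score threshold for each analyst-confirmed false
    positive; lower it gently for confirmed true positives to protect recall."""
    for item in flagged:
        verdict = feedback.get(item)       # items without feedback are skipped
        if verdict == "false_positive":
            threshold += step
        elif verdict == "true_positive":
            threshold -= step / 2
    return threshold
```

The asymmetric steps encode the production reality the abstract points at: false positives burn analyst time, so the loop should suppress them faster than it chases extra recall.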