Prof. YANG’s work focuses on prognostic health monitoring and robotics technologies for intelligent safety monitoring in smart cities. Fundamental research addresses data-driven condition monitoring of electromechanical equipment in Internet of Things environments, with a focus on multimodal signal processing, intelligent diagnosis, and resilient dynamic monitoring. Core robotics research includes machine vision-based perception, 3D shape recognition, and agile robot control for safety monitoring applications.
Research areas include:
A physically meaningful diagnostic model is proposed that uses gradient-weighted class activation mapping (Grad-CAM) to guide the model to focus on the same informative frequency bands of the input spectra while ignoring noisy and irrelevant parts of the signal. The model not only reveals which regions of the vibration spectra it attends to, helping users understand its decision-making process, but also exhibits better noise robustness.
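For illustration, the sketch below applies Grad-CAM to a hypothetical 1D convolutional classifier that takes a vibration spectrum as input; the architecture, layer choices, and class count are assumptions for the example, not the published model.

```python
# A minimal sketch (not the published model): Grad-CAM applied to a 1D CNN
# fault classifier whose input is a vibration spectrum.
import torch
import torch.nn as nn
import torch.nn.functional as F

class SpectrumCNN(nn.Module):
    def __init__(self, n_classes: int = 4):
        super().__init__()
        self.features = nn.Sequential(          # hypothetical backbone
            nn.Conv1d(1, 16, kernel_size=9, padding=4), nn.ReLU(),
            nn.Conv1d(16, 32, kernel_size=9, padding=4), nn.ReLU(),
        )
        self.head = nn.Sequential(
            nn.AdaptiveAvgPool1d(1), nn.Flatten(), nn.Linear(32, n_classes)
        )

    def forward(self, x):
        return self.head(self.features(x))

def grad_cam_1d(model: SpectrumCNN, spectrum: torch.Tensor, target_class: int) -> torch.Tensor:
    """Return a normalized saliency curve over frequency bins for one spectrum (1, 1, F)."""
    activations, gradients = [], []
    # Hook the last convolutional block to capture its activations and gradients.
    handle_f = model.features.register_forward_hook(
        lambda m, i, o: activations.append(o))
    handle_b = model.features.register_full_backward_hook(
        lambda m, gi, go: gradients.append(go[0]))
    try:
        score = model(spectrum)[0, target_class]
        model.zero_grad()
        score.backward()
    finally:
        handle_f.remove()
        handle_b.remove()
    acts, grads = activations[0], gradients[0]            # (1, C, F')
    weights = grads.mean(dim=2, keepdim=True)              # per-channel importance
    cam = F.relu((weights * acts).sum(dim=1))              # (1, F')
    cam = F.interpolate(cam.unsqueeze(1), size=spectrum.shape[-1],
                        mode="linear", align_corners=False).squeeze()
    return cam / (cam.max() + 1e-8)

# Usage: high CAM values mark the frequency bands the model relied on, which can be
# compared against the physically expected fault characteristic frequencies.
model = SpectrumCNN()
spectrum = torch.randn(1, 1, 1024)
saliency = grad_cam_1d(model, spectrum, target_class=2)
```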
Our group combines machine learning-based nonlinear noise reduction and dimensionality reduction with “wavelet filtering + empirical mode decomposition” to extract features from signals, and, for the first time, proposes a hierarchical output-cascade random forest ensemble learning model. This work has been published in the journal ISA Transactions. The generalized framework of this method has been applied to the fault monitoring and early warning system for the tunnel ventilators of the Hong Kong-Zhuhai-Macao Bridge.
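A minimal sketch of this kind of pipeline is shown below, assuming PyWavelets, the EMD-signal package, and scikit-learn; the denoising threshold, IMF statistics, and two-stage cascade are illustrative choices rather than the published configuration.

```python
# Illustrative sketch: wavelet denoising + EMD-based features feeding a
# two-stage (cascaded) random forest ensemble. Not the exact published model.
import numpy as np
import pywt
from PyEMD import EMD                      # pip install EMD-signal
from sklearn.ensemble import RandomForestClassifier

def wavelet_denoise(signal: np.ndarray, wavelet: str = "db4", level: int = 3) -> np.ndarray:
    """Soft-threshold the detail coefficients and reconstruct the signal."""
    coeffs = pywt.wavedec(signal, wavelet, level=level)
    sigma = np.median(np.abs(coeffs[-1])) / 0.6745
    thresh = sigma * np.sqrt(2 * np.log(len(signal)))
    coeffs[1:] = [pywt.threshold(c, thresh, mode="soft") for c in coeffs[1:]]
    return pywt.waverec(coeffs, wavelet)[: len(signal)]

def emd_features(signal: np.ndarray, max_imfs: int = 4) -> np.ndarray:
    """Simple statistics (RMS, peak, energy) of the leading intrinsic mode functions."""
    imfs = EMD()(signal)[:max_imfs]
    feats = []
    for imf in imfs:
        feats += [np.sqrt(np.mean(imf ** 2)), np.max(np.abs(imf)), np.sum(imf ** 2)]
    feats += [0.0] * (3 * max_imfs - len(feats))   # pad if fewer IMFs were found
    return np.asarray(feats)

def extract_features(raw_signals: np.ndarray) -> np.ndarray:
    return np.stack([emd_features(wavelet_denoise(s)) for s in raw_signals])

class CascadeForest:
    """Two-stage cascade: stage 2 sees stage 1's class probabilities as extra inputs."""
    def __init__(self):
        self.stage1 = RandomForestClassifier(n_estimators=200, random_state=0)
        self.stage2 = RandomForestClassifier(n_estimators=200, random_state=1)

    def fit(self, X, y):
        self.stage1.fit(X, y)
        augmented = np.hstack([X, self.stage1.predict_proba(X)])
        self.stage2.fit(augmented, y)
        return self

    def predict(self, X):
        augmented = np.hstack([X, self.stage1.predict_proba(X)])
        return self.stage2.predict(augmented)

# Usage with hypothetical ventilator vibration windows:
# X = extract_features(raw_windows); model = CascadeForest().fit(X, labels)
```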
Our group proposes a novel and fast capsule network-based approach for diagnosing complex faults in rotating machinery. The method introduces dynamic pruning and dynamic routing modules to improve the training efficiency of the network. By imposing consistency constraints on capsules in the same layer, the consistency evaluation index of the dynamic routing algorithm is improved and homogeneity of the capsule layers is avoided. In addition, a composite loss function consisting of a supervised loss and an unsupervised loss is proposed; the unsupervised loss incorporates multi-scale mutual information, maximizing the harmonic mean of local and global mutual information to improve the accuracy of fault identification.
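To make the routing terminology concrete, the sketch below implements standard routing-by-agreement and the capsule margin loss in PyTorch; the dynamic pruning, same-layer consistency constraint, and mutual-information loss are the contributions of our method and are only indicated by a comment here.

```python
# Minimal capsule routing sketch (standard routing-by-agreement), for reference only.
import torch
import torch.nn.functional as F

def squash(s: torch.Tensor, dim: int = -1) -> torch.Tensor:
    """Non-linear squashing so each capsule's length lies in (0, 1)."""
    norm2 = (s ** 2).sum(dim=dim, keepdim=True)
    return (norm2 / (1.0 + norm2)) * s / torch.sqrt(norm2 + 1e-8)

def dynamic_routing(u_hat: torch.Tensor, iterations: int = 3) -> torch.Tensor:
    """u_hat: (batch, n_in, n_out, dim) prediction vectors from lower-level capsules."""
    b = torch.zeros(u_hat.shape[:3], device=u_hat.device)   # routing logits
    for _ in range(iterations):
        c = F.softmax(b, dim=2)                              # coupling coefficients
        s = (c.unsqueeze(-1) * u_hat).sum(dim=1)             # weighted sum -> (batch, n_out, dim)
        v = squash(s)                                        # output capsules
        # <- the same-layer consistency constraint and dynamic pruning of
        #    low-agreement routes would be applied at this point in our method.
        b = b + (u_hat * v.unsqueeze(1)).sum(dim=-1)         # agreement update
    return v

def margin_loss(v: torch.Tensor, labels: torch.Tensor,
                m_pos: float = 0.9, m_neg: float = 0.1, lam: float = 0.5) -> torch.Tensor:
    """Supervised part of a composite loss (standard capsule margin loss)."""
    lengths = v.norm(dim=-1)                                 # (batch, n_classes)
    t = F.one_hot(labels, num_classes=lengths.size(1)).float()
    loss = t * F.relu(m_pos - lengths) ** 2 + lam * (1 - t) * F.relu(lengths - m_neg) ** 2
    return loss.sum(dim=1).mean()

# Usage: route 32 primary capsules (8 predictions of 16-D class capsules each).
u_hat = torch.randn(8, 32, 4, 16)
v = dynamic_routing(u_hat)
loss = margin_loss(v, labels=torch.randint(0, 4, (8,)))
```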
To address the machine vision perception problems of low recognition accuracy on sparse and incomplete point clouds, we propose a point cloud learning enhancement method based on maximizing cross-layer feature mutual information. Our approach improves the performance of most general-purpose point cloud learning networks without additional data augmentation.
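The sketch below shows one way such a cross-layer mutual information objective can be written as an InfoNCE-style auxiliary loss between early-layer point features and the late-layer global descriptor; the projection heads, pooling, and temperature are assumptions for the example, not the published formulation.

```python
# Illustrative cross-layer mutual information surrogate (InfoNCE lower bound).
import torch
import torch.nn as nn
import torch.nn.functional as F

class CrossLayerMILoss(nn.Module):
    def __init__(self, local_dim: int, global_dim: int, proj_dim: int = 64, tau: float = 0.2):
        super().__init__()
        self.proj_local = nn.Linear(local_dim, proj_dim)
        self.proj_global = nn.Linear(global_dim, proj_dim)
        self.tau = tau

    def forward(self, local_feats: torch.Tensor, global_feat: torch.Tensor) -> torch.Tensor:
        """local_feats: (B, N, d_l) per-point features from an early layer;
        global_feat: (B, d_g) pooled descriptor from a late layer."""
        z_l = F.normalize(self.proj_local(local_feats).mean(dim=1), dim=-1)  # early-layer summary
        z_g = F.normalize(self.proj_global(global_feat), dim=-1)
        logits = z_g @ z_l.t() / self.tau
        # Matching (cloud, its own early-layer summary) pairs are positives;
        # other clouds in the batch act as negatives.
        targets = torch.arange(logits.size(0), device=logits.device)
        return F.cross_entropy(logits, targets)

# Usage as an auxiliary term next to the usual classification loss:
mi_loss = CrossLayerMILoss(local_dim=64, global_dim=1024)
local_feats, global_feat = torch.randn(16, 2048, 64), torch.randn(16, 1024)
aux = mi_loss(local_feats, global_feat)   # add lambda * aux to the task loss
```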
When operating in tunnels, utility pipe galleries, and similar scenes, the safety monitoring robot may face insufficient illumination and occluded objects. Insufficient illumination causes the vision sensor to capture images with low brightness and heavy noise, while occlusion leaves only limited object information in the image, reducing the accuracy of object recognition algorithms. To overcome these difficulties, we propose an adaptive low-light enhancement algorithm and an occlusion-aware object pose estimation algorithm that handle image degradation and limited object features, providing robust data support for subsequent robot operations.
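As a simple illustration of adapting the enhancement strength to the measured illumination, the sketch below applies brightness-dependent gamma correction followed by light denoising; it conveys the idea of adaptive low-light enhancement but is not our published algorithm.

```python
# Illustrative adaptive low-light enhancement: gamma chosen from measured brightness.
import cv2
import numpy as np

def adaptive_low_light_enhance(bgr: np.ndarray, target_mean: float = 0.45) -> np.ndarray:
    """Brighten dark images more aggressively than already well-lit ones."""
    hsv = cv2.cvtColor(bgr, cv2.COLOR_BGR2HSV).astype(np.float32)
    v = hsv[..., 2] / 255.0
    mean_v = float(v.mean()) + 1e-6
    # Choose gamma so that mean brightness moves toward the target:
    # mean_v ** gamma ~= target_mean  =>  gamma = log(target) / log(mean).
    gamma = np.clip(np.log(target_mean) / np.log(mean_v), 0.3, 1.0)
    hsv[..., 2] = np.power(v, gamma) * 255.0
    out = cv2.cvtColor(hsv.astype(np.uint8), cv2.COLOR_HSV2BGR)
    # Light denoising, since gamma correction also amplifies sensor noise.
    return cv2.fastNlMeansDenoisingColored(out, None, 5, 5, 7, 21)

# Usage: frame = cv2.imread("tunnel_frame.png"); enhanced = adaptive_low_light_enhance(frame)
```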
Implementing robotic safety monitoring functions through manual teaching is labor-intensive and inefficient. In our work, the progress incentive mechanism of human learning, drawn from psychology, is introduced into reinforcement learning, overcoming the poor training performance caused by uniform sampling and sparse reward information. Without manual teaching, the robotic arm can achieve robust safety monitoring functions in industrial scenarios.
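The sketch below illustrates one way a progress-based incentive can replace uniform replay sampling, favoring transitions where the temporal-difference error has recently decreased the most; it is an illustrative stand-in rather than the exact mechanism in our work.

```python
# Illustrative progress-prioritized replay buffer (not the published mechanism).
import random
from collections import deque

import numpy as np

class ProgressReplayBuffer:
    def __init__(self, capacity: int = 100_000, eps: float = 1e-3):
        self.buffer = deque(maxlen=capacity)   # each item: [transition, last_td, progress]
        self.eps = eps

    def add(self, transition, td_error: float):
        self.buffer.append([transition, abs(td_error), self.eps])

    def sample(self, batch_size: int):
        """Sample transitions with probability proportional to recent learning progress."""
        progress = np.array([item[2] for item in self.buffer]) + self.eps
        probs = progress / progress.sum()
        idx = np.random.choice(len(self.buffer), size=batch_size, p=probs)
        return idx, [self.buffer[i][0] for i in idx]

    def update(self, idx, new_td_errors):
        """Progress = how much the TD error shrank since the transition was last replayed."""
        for i, td in zip(idx, new_td_errors):
            last_td = self.buffer[i][1]
            self.buffer[i][2] = max(last_td - abs(td), self.eps)
            self.buffer[i][1] = abs(td)

# Usage inside a training loop: after computing TD errors for a sampled batch,
# call buffer.update(idx, td_errors) so future sampling favors transitions
# on which the agent is still making progress.
```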