publications
Publications by category in reverse chronological order. Generated by jekyll-scholar.
2023
- [USENIX Security] AIRS: Explanation for Deep Reinforcement Learning based Security Applications. Jiahao Yu, Wenbo Guo, Qi Qin, and 3 more authors. In Proceedings of the USENIX Security Symposium, 2023.
Recently, we have witnessed the success of deep reinforcement learning (DRL) in many security applications, ranging from malware mutation to selfish blockchain mining. Like all other machine learning methods, the lack of explainability has limited its broad adoption, as users have difficulty establishing trust in DRL models’ decisions. Over the past years, different methods have been proposed to explain DRL models, but unfortunately they are often not suitable for security applications, in which explanation fidelity, efficiency, and the capability of model debugging are largely lacking. In this work, we propose AIRS, a general framework to explain deep reinforcement learning-based security applications. Unlike previous works that attribute an agent’s current action to important input features, our explanation is at the step level. It models the relationship between the final reward and the key steps that a DRL agent takes, and thus outputs the steps that are most critical to the final reward the agent has gathered. Using four representative security-critical applications, we evaluate AIRS from the perspectives of explainability, fidelity, stability, and efficiency. We show that AIRS can outperform alternative explainable DRL methods. We also showcase AIRS’s utility, demonstrating that our explanations can facilitate offsetting the DRL model’s failures, help users establish trust in a model’s decisions, and even assist in identifying inappropriate reward designs.
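The step-level idea above can be illustrated with a toy counterfactual probe: replace the agent's action at one step and measure the drop in episodic return. This is a hypothetical sketch for intuition only, not the AIRS algorithm; the environment, policy, and function names are all made up.

```python
import numpy as np

def rollout(env_step, policy, horizon=10, override=None):
    """Run one episode; optionally override the action at a single step."""
    state, total_reward = 0.0, 0.0
    for t in range(horizon):
        action = policy(state)
        if override is not None and t == override[0]:
            action = override[1]              # counterfactual action at step t
        state, reward = env_step(state, action)
        total_reward += reward
    return total_reward

# Trivial deterministic environment and policy, just to make this runnable.
env_step = lambda s, a: (s + a, 1.0 if a > 0 else -1.0)
policy = lambda s: 1.0

baseline = rollout(env_step, policy)
# Criticality of step t = return lost when that step's action is replaced.
criticality = [baseline - rollout(env_step, policy, override=(t, -1.0))
               for t in range(10)]
print("most critical steps:", np.argsort(criticality)[::-1][:3])
```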
2021
- [TMC] Matrix Gaussian Mechanisms for Differentially-Private Learning. Jungang Yang, Liyao Xiang, Jiahao Yu, and 4 more authors. In IEEE Transactions on Mobile Computing, 2021.
The wide deployment of machine learning algorithms has become a severe threat to user data privacy. As learning data is of high dimensionality and high order, preserving its privacy is intrinsically hard. Conventional differential privacy mechanisms often incur significant utility decline because they are designed for scalar values from the start. We recognize that this is because conventional approaches do not take the data’s structural information into account, and thus fail to provide sufficient privacy or utility. As the main novelty of this work, we propose the Matrix Gaussian Mechanism (MGM), a new (ε, δ)-differential privacy mechanism for preserving learning data privacy. By imposing unimodal distributions on the noise, we introduce two mechanisms based on MGM with improved utility. We further show that, with the utility space available, the proposed mechanisms can be instantiated with optimized utility and have a closed-form solution scalable to large-scale problems. We experimentally show that our mechanisms, applied to privacy-preserving federated learning, are superior to state-of-the-art differential privacy mechanisms in utility.
@article{yang2021matrix,
  author  = {Yang, Jungang and Xiang, Liyao and Yu, Jiahao and Wang, Xinbing and Guo, Bin and Li, Zhetao and Li, Baochun},
  journal = {IEEE Transactions on Mobile Computing},
  title   = {Matrix Gaussian Mechanisms for Differentially-Private Learning},
  year    = {2021},
}
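For intuition, a matrix Gaussian mechanism perturbs a whole matrix with matrix-variate Gaussian noise whose row and column covariances reflect the data's structure. The sketch below only illustrates the sampling step N = A Z Bᵀ (with row_cov = AAᵀ and col_cov = BBᵀ); the covariances and scale are placeholders, not calibrated to any (ε, δ) budget, so this is not the paper's MGM.

```python
import numpy as np

rng = np.random.default_rng(42)

def matrix_gaussian_noise(shape, row_cov, col_cov):
    """Sample N ~ MN(0, row_cov, col_cov) as N = A @ Z @ B.T,
    where row_cov = A @ A.T and col_cov = B @ B.T."""
    A = np.linalg.cholesky(row_cov)
    B = np.linalg.cholesky(col_cov)
    Z = rng.standard_normal(shape)
    return A @ Z @ B.T

m, n = 4, 3
grad = rng.standard_normal((m, n))   # stand-in for one gradient matrix
sigma = 1.0                          # placeholder noise scale (NOT calibrated)
row_cov = sigma * np.eye(m)          # placeholder covariances; the paper
col_cov = sigma * np.eye(n)          # optimizes these for utility
noisy_grad = grad + matrix_gaussian_noise((m, n), row_cov, col_cov)
print(noisy_grad.shape)              # (4, 3)
```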
- [CIKM] Speedup robust graph structure learning with low-rank information. Hui Xu, Liyao Xiang, Jiahao Yu, and 2 more authors. In Proceedings of the 30th ACM International Conference on Information & Knowledge Management, 2021.
@inproceedings{xu2021speedup,
  title     = {Speedup robust graph structure learning with low-rank information},
  author    = {Xu, Hui and Xiang, Liyao and Yu, Jiahao and Cao, Anqi and Wang, Xinbing},
  booktitle = {Proceedings of the 30th ACM International Conference on Information \& Knowledge Management},
  pages     = {2241--2250},
  year      = {2021},
}
2020
- [INFOCOM] Voiceprint mimicry attack towards speaker verification system in smart home. Lei Zhang, Yan Meng, Jiahao Yu, and 3 more authors. In Proceedings of IEEE INFOCOM, 2020.
The advancement of voice controllable systems (VCSes) has dramatically affected our daily lifestyle and catalyzed the smart home’s deployment. Currently, most VCSes exploit automatic speaker verification (ASV) to prevent various voice attacks (e.g., replay attacks). In this study, we present VMask, a novel and practical voiceprint mimicry attack that can fool the ASV in a smart home and inject malicious voice commands disguised as those of a legitimate user. The key observation behind VMask is that the deep learning models utilized by ASV are vulnerable to subtle perturbations in the voice input space. To generate these subtle perturbations, VMask leverages the idea of adversarial examples. By adding the subtle perturbations to recordings from an arbitrary speaker, VMask can mislead the ASV into classifying the crafted speech samples, which still sound like the original speaker to humans, as the targeted victim. Moreover, psychoacoustic masking is employed to keep the adversarial perturbations below the human perception threshold, thus making the victim unaware of ongoing attacks. We validate the effectiveness of VMask by performing comprehensive experiments on both grey-box (VGGVox) and black-box (Microsoft Azure Speaker Verification) ASVs. Additionally, a real-world case study on Apple HomeKit demonstrates VMask’s practicality on smart home platforms.
@inproceedings{zhang2020voiceprint,
  title        = {Voiceprint mimicry attack towards speaker verification system in smart home},
  author       = {Zhang, Lei and Meng, Yan and Yu, Jiahao and Xiang, Chong and Falk, Brandon and Zhu, Haojin},
  booktitle    = {Proceedings of IEEE INFOCOM},
  pages        = {377--386},
  year         = {2020},
  organization = {IEEE},
}
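The core adversarial-example loop behind such an attack can be sketched in a few lines: nudge the input so a speaker-embedding model maps it toward the victim's embedding, while a norm bound keeps the perturbation small. Below, a random linear model stands in for the real ASV network and an L-infinity clip stands in for psychoacoustic masking; everything here is a hypothetical toy, not VMask itself.

```python
import numpy as np

rng = np.random.default_rng(7)

W = rng.standard_normal((8, 128))    # toy "speaker embedding" model: e = W @ x
x = rng.standard_normal(128)         # features of the attacker's own recording
target = rng.standard_normal(8)      # hypothetical victim speaker embedding

eps, step = 0.05, 0.01               # perturbation bound / gradient step size
delta = np.zeros_like(x)
for _ in range(100):
    e = W @ (x + delta)
    grad = 2.0 * W.T @ (e - target)  # gradient of ||e - target||^2 w.r.t. input
    # Signed gradient descent, clipped to the "imperceptibility" budget eps.
    delta = np.clip(delta - step * np.sign(grad), -eps, eps)

print("distance to target embedding:",
      np.linalg.norm(W @ (x + delta) - target))
```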
2019
- [arXiv] Invisible backdoor attacks against deep neural networks. Shaofeng Li, Benjamin Zi Hao Zhao, Jiahao Yu, and 3 more authors. arXiv preprint, 2019.