Zhi Chen


    Ph.D. Student
    Computer Science
    University of Illinois at Urbana-Champaign

    CONTACT INFO

    Address: Thomas M. Siebel Center
                     201 North Goodwin Avenue
                     Urbana, IL 61801
    Email: zhic4@illinois.edu

Google Scholar

About

I am a first-year Ph.D. student in Computer Science at the University of Illinois at Urbana-Champaign, advised by Professor Gang Wang. My research interests lie in security and machine learning.

I received my M.S. in Electrical Engineering and Computer Sciences from the University of California, Berkeley in 2020, under the supervision of Professor Dawn Song at the Center for Long-Term Cybersecurity. I received my B.S. in Electrical Engineering and Computer Sciences from UC Berkeley in 2019. I held research internships at the Berkeley Artificial Intelligence Research Lab and Alibaba DAMO Academy in 2018.


Education

University of Illinois at Urbana-Champaign, Urbana, IL.
Ph.D., Computer Science
2020 - 2025
University of California, Berkeley, Berkeley, CA.
Master of Science, Electrical Engineering and Computer Sciences
2019 - 2020
University of California, Berkeley, Berkeley, CA.
Bachelor of Science, Electrical Engineering and Computer Sciences
2016 - 2019
Duke University, Durham, NC.
Summer Session, Economics: Game Theory
2015


Research

  • Center for Long-Term Cybersecurity, UC Berkeley, Berkeley, CA.
  • Graduate Researcher
    Jan. 2019 - Aug. 2020

Supervised by Prof. Dawn Song (initially as an undergraduate research assistant); collaborated with postdoctoral researchers Min Du and Ruoxi Jia on research projects related to deep learning and security.

Lifelong anomaly detection through unlearning:
• Developed LSTM models to analyze system log files.
• Maintained a small memory set of labeled data to prevent catastrophic forgetting.
• Developed an unlearning update that is much easier and faster than retraining the system from scratch (illustrative sketch below).
• Experimental results show a reduction of up to 77.3% in false positives and up to 76.6% in false negatives on real-world anomaly detection datasets (paper presented at CCS '19).
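A minimal PyTorch sketch of the core idea, not the CCS '19 implementation: an LSTM that predicts the next log key from a window of previous keys, plus an unlearning step that raises the loss on a newly labeled false negative while replaying the small memory set of labeled normal data so the correction does not cause catastrophic forgetting. Model sizes, the gradient-ascent weight, and the replay scheme are illustrative assumptions.

# Illustrative sketch only; hyperparameters and the update rule are assumptions.
import torch
import torch.nn as nn

class LogKeyLSTM(nn.Module):
    """Predicts the next log key from a window of previous log keys."""
    def __init__(self, num_keys, embed_dim=32, hidden_dim=64):
        super().__init__()
        self.embed = nn.Embedding(num_keys, embed_dim)
        self.lstm = nn.LSTM(embed_dim, hidden_dim, batch_first=True)
        self.head = nn.Linear(hidden_dim, num_keys)

    def forward(self, x):                      # x: (batch, window) of log-key ids
        out, _ = self.lstm(self.embed(x))
        return self.head(out[:, -1])           # logits over the next key

def unlearn_false_negative(model, optimizer, anomalous, memory_set, ascent_weight=1.0):
    """One corrective update: raise the loss on the newly labeled anomalous
    sequence (so it is no longer scored as normal) while replaying a small
    memory set of normal data to limit catastrophic forgetting."""
    criterion = nn.CrossEntropyLoss()
    x_a, y_a = anomalous
    x_m = torch.cat([x for x, _ in memory_set])
    y_m = torch.cat([y for _, y in memory_set])
    optimizer.zero_grad()
    loss = criterion(model(x_m), y_m) - ascent_weight * criterion(model(x_a), y_a)
    loss.backward()
    optimizer.step()
    return loss.item()

# Toy usage with random log-key windows.
model = LogKeyLSTM(num_keys=50)
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
seq = lambda: torch.randint(0, 50, (1, 10))
key = lambda: torch.randint(0, 50, (1,))
memory = [(seq(), key()) for _ in range(8)]
unlearn_false_negative(model, opt, (seq(), key()), memory)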

Adversarial enhancement for community detection in networks:
• Designed a multi-objective fitness function and automatic thresholding to address the resolution limit problem and reach a consensus partition (sketched below).
• Evaluated with existing community detection algorithms, improving their performance by 10%-30%.
• Adversarial experiments show that the proposed methods achieve stronger defense against community detection deception (preprint on arXiv).
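As a rough illustration of the kind of multi-objective fitness involved, the sketch below (assuming networkx; the weights and the target-cluster heuristic are arbitrary choices, not the AE-GA settings) rewards modularity while penalizing partitions whose cluster count strays far from a target, one way to counter the resolution limit.

# Illustrative modularity-plus-cluster-count fitness; weights are assumptions.
import math
import networkx as nx
from networkx.algorithms.community import modularity

def fitness(G, partition, alpha=1.0, beta=0.1, target_k=None):
    """Score a candidate partition: reward modularity, penalize a cluster
    count far from a rough target (tiny or giant communities are discouraged)."""
    if target_k is None:
        target_k = max(1, round(math.sqrt(G.number_of_nodes())))  # crude prior
    q = modularity(G, partition)
    k_penalty = abs(len(partition) - target_k) / target_k
    return alpha * q - beta * k_penalty

# Compare two candidate partitions of Zachary's karate club.
G = nx.karate_club_graph()
two_way = [set(range(0, 17)), set(range(17, 34))]
three_way = [set(range(0, 9)), set(range(9, 17)), set(range(17, 34))]
print(fitness(G, two_way), fitness(G, three_way))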

Time-aware gradient attack on dynamic network link prediction:
• Utilized the gradient information generated by DDNE across different snapshots to rewire a few links, accounting for the dynamic nature of real-world systems (greedy variant sketched below).
• Implemented TGA in two ways: one based on traversal search (TGA-Tra) and the other on greedy search (TGA-Gre).
• Comprehensive experiments on data from real-world scenarios show that TGA increases the attack success rate by 20%-40% (preprint on arXiv).
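The greedy variant can be sketched in a few lines. In the illustration below, the model is a stand-in callable that maps a stack of adjacency snapshots to a matrix of link scores (a toy scorer is included); the flip rule and budget are simplifying assumptions rather than the TGA-Gre implementation.

# Illustrative greedy gradient attack; the scorer and flip rule are assumptions.
import torch

class ToyScorer(torch.nn.Module):
    """Stand-in for a DDNE-style predictor: scores link (u, v) by two-hop
    connectivity in the most recent snapshot."""
    def forward(self, snaps):
        last = snaps[-1]
        return torch.sigmoid(last @ last)

def tga_greedy(model, snapshots, target, budget=5):
    """Greedily rewire `budget` links across historical snapshots so that the
    model's predicted score for the target link (i, j) drops.
    snapshots: float tensor of shape (T, N, N) holding adjacency matrices."""
    i, j = target
    T, N, _ = snapshots.shape
    adv = snapshots.clone().detach()
    for _ in range(budget):
        adv.requires_grad_(True)
        score = model(adv)[i, j]             # predicted score of the target link
        score.backward()
        grad = adv.grad.detach()
        # Removing an existing link helps when its gradient is positive;
        # adding a missing link helps when its gradient is negative.
        gain = torch.where(adv.detach() > 0.5, grad, -grad)
        gain[:, i, j] = float('-inf')        # never modify the target link itself
        idx = int(gain.flatten().argmax())
        t, u, v = idx // (N * N), (idx // N) % N, idx % N
        adv = adv.detach()
        adv[t, u, v] = 1.0 - adv[t, u, v]    # flip the chosen link
    return adv

adv_snapshots = tga_greedy(ToyScorer(),
                           torch.bernoulli(torch.full((4, 20, 20), 0.1)),
                           target=(2, 7))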

NDSGD: a practical method to improve the robustness of deep learning models on noisy datasets:
• Used noisy-data clipping and grouping to reduce the influence of noisy samples (sketched below).
• Added robustness factors to reduce oscillation of the loss curve and tuned hyper-parameters to learn optimal models.
• Evaluated on widely used benchmark datasets, with performance surpassing the state of the art.
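One simple way to picture the clipping idea is shown below: per-sample losses are capped before averaging, so likely-mislabeled, high-loss samples contribute little gradient. The threshold and training setup are illustrative assumptions, not the NDSGD settings.

# Illustrative loss-clipping SGD step for noisy labels; threshold is an assumption.
import torch
import torch.nn.functional as F

def noisy_sgd_step(model, optimizer, x, y, clip_value=2.0):
    """Cap each sample's loss before averaging so that likely-mislabeled,
    high-loss samples contribute little gradient; this also damps
    oscillation of the training loss curve."""
    optimizer.zero_grad()
    per_sample = F.cross_entropy(model(x), y, reduction='none')
    loss = torch.clamp(per_sample, max=clip_value).mean()
    loss.backward()
    optimizer.step()
    return loss.item()

# Toy usage with a linear classifier and random (possibly mislabeled) data.
model = torch.nn.Linear(20, 4)
opt = torch.optim.SGD(model.parameters(), lr=0.1)
noisy_sgd_step(model, opt, torch.randn(32, 20), torch.randint(0, 4, (32,)))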

  • Alibaba DAMO Academy, Hangzhou, China.
  • Research Intern
    Dec. 2018 - Jan. 2019

Participated in a project on database security: assisted in parsing unstructured, free-text log entries into structured representations and in developing a Long Short-Term Memory (LSTM) model to detect abnormal database conditions (parsing step sketched below).
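As an illustration of the parsing step, the sketch below turns a free-text audit-log line into a structured record with a regular expression; the pattern and field names are assumptions for a generic database log, not the actual log format used in the project.

# Illustrative log-line parser; the pattern and fields are assumptions.
import re

LOG_PATTERN = re.compile(
    r'(?P<timestamp>\d{4}-\d{2}-\d{2} \d{2}:\d{2}:\d{2})\s+'
    r'(?P<level>\w+)\s+user=(?P<user>\w+)\s+query="(?P<query>[^"]*)"'
)

def parse_log_line(line):
    """Return a dict of structured fields, or None if the line does not match."""
    m = LOG_PATTERN.match(line)
    return m.groupdict() if m else None

print(parse_log_line('2019-01-05 10:22:31 INFO user=alice query="SELECT * FROM orders"'))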

  • Berkeley Artificial Intelligence Research Lab, UC Berkeley, Berkeley, CA.
  • Research Assistant
    May 2018 - Nov. 2018

Collaborated with PhD student Xiangyu Yue (Advisor: Prof. Kurt Keutzer) on research projects related to deep learning.

Domain Adaptation for Road-object Segmentation:
• Developed a semantics-based scene method that performs 3D object segmentation from a point-wise label map, using domain-adaptation training to reduce the distribution gap between synthetic and real data and thereby improve model performance (sketched below).
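One compact way to picture the domain-adaptation training is a gradient-reversal setup, sketched below: a supervised per-point segmentation loss on labeled synthetic data plus an adversarial domain loss that pushes synthetic and real features toward the same distribution. The architecture stubs, shapes, and loss weight are illustrative assumptions, not the project's exact method.

# Illustrative domain-adversarial training step; models and shapes are toy stand-ins.
import torch
import torch.nn as nn
import torch.nn.functional as F

class GradReverse(torch.autograd.Function):
    """Identity on the forward pass; flips (and scales) the gradient on the
    backward pass, so the encoder is pushed toward domain-invariant features."""
    @staticmethod
    def forward(ctx, x, lam):
        ctx.lam = lam
        return x.view_as(x)
    @staticmethod
    def backward(ctx, grad_output):
        return -ctx.lam * grad_output, None

def adaptation_step(encoder, seg_head, dom_head, optimizer,
                    syn_x, syn_y, real_x, lam=0.1):
    """One step: supervised per-point segmentation loss on labeled synthetic
    data plus an adversarial domain loss on synthetic vs. real features."""
    optimizer.zero_grad()
    f_syn, f_real = encoder(syn_x), encoder(real_x)          # (B, P, D) features
    seg_loss = F.cross_entropy(seg_head(f_syn).permute(0, 2, 1), syn_y)
    feats = torch.cat([f_syn, f_real])
    domains = torch.cat([torch.zeros(len(f_syn)), torch.ones(len(f_real))]).long()
    dom_loss = F.cross_entropy(dom_head(GradReverse.apply(feats, lam)), domains)
    (seg_loss + dom_loss).backward()
    optimizer.step()
    return seg_loss.item(), dom_loss.item()

# Toy usage: B point clouds of P points with xyz inputs, C segmentation classes.
B, P, D, C = 2, 64, 16, 5
encoder = nn.Sequential(nn.Linear(3, D), nn.ReLU())           # per-point features
seg_head = nn.Linear(D, C)                                    # per-point class logits
dom_head = nn.Sequential(nn.Flatten(1), nn.Linear(P * D, 2))  # per-cloud domain logits
params = list(encoder.parameters()) + list(seg_head.parameters()) + list(dom_head.parameters())
opt = torch.optim.SGD(params, lr=0.01)
adaptation_step(encoder, seg_head, dom_head, opt,
                torch.randn(B, P, 3), torch.randint(0, C, (B, P)), torch.randn(B, P, 3))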

Autonomous driving with SqueezeNet and CNN:
• Developed Convolutional Neural Network (CNN) models in TensorFlow to classify images (sketched below).
• Conducted image segmentation on the KITTI dataset and trained models based on SqueezeNet and CNN, with the goal of collecting data from GTA-V (an action-adventure video game) and using that data to train a CNN model for autonomous driving.
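A minimal TensorFlow/Keras image classifier of the kind described above is sketched below; the architecture and input size are placeholders rather than the SqueezeNet configuration used in the project.

# Illustrative CNN classifier; architecture and input size are assumptions.
import tensorflow as tf

def build_cnn(num_classes=10, input_shape=(128, 128, 3)):
    return tf.keras.Sequential([
        tf.keras.Input(shape=input_shape),
        tf.keras.layers.Conv2D(32, 3, activation='relu'),
        tf.keras.layers.MaxPooling2D(),
        tf.keras.layers.Conv2D(64, 3, activation='relu'),
        tf.keras.layers.MaxPooling2D(),
        tf.keras.layers.GlobalAveragePooling2D(),
        tf.keras.layers.Dense(num_classes, activation='softmax'),
    ])

model = build_cnn()
model.compile(optimizer='adam',
              loss='sparse_categorical_crossentropy',
              metrics=['accuracy'])
model.summary()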


Teaching


Publications & Preprints

  • Lifelong Anomaly Detection Through Unlearning
  • Min Du, Zhi Chen, Chang Liu, Rajvardhan Oak, Dawn Song
  • Proceedings of The 26th ACM Conference on Computer and Communications Security (CCS)
  • London, UK, November 2019.
  • PDF

Anomaly detection is essential towards ensuring system security and reliability. Powered by constantly generated system data, deep learning has been found both effective and flexible to use, with its ability to extract patterns without much domain knowledge. Existing anomaly detection research focuses on a scenario referred to as zero-positive, which means that the detection model is only trained for normal (i.e., negative) data. In a real application scenario, there may be additional manually inspected positive data provided after the system is deployed. We refer to this scenario as lifelong anomaly detection. However, we find that existing approaches are not easy to adopt such new knowledge to improve system performance. In this work, we are the first to explore the lifelong anomaly detection problem, and propose novel approaches to handle corresponding challenges. In particular, we propose a framework called unlearning, which can effectively correct the model when a false negative (or a false positive) is labeled. To this aim, we develop several novel techniques to tackle two challenges referred to as exploding loss and catastrophic forgetting. In addition, we abstract a theoretical framework based on generative models. Under this framework, our unlearning approach can be presented in a generic way to be applied to most zero-positive deep learning-based anomaly detection algorithms to turn them into corresponding lifelong anomaly detection solutions. We evaluate our approach using two state-of-the-art zero-positive deep learning anomaly detection architectures and three real-world tasks. The results show that the proposed approach is able to significantly reduce the number of false positives and false negatives through unlearning.
  • Adversarial Enhancement for Community Detection in Complex Networks
  • Jiajun Zhou, Zhi Chen, Min Du, Lihong Chen, Shanqing Yu, Feifei Li, Guanrong Chen, Qi Xuan
  • arXiv preprint arXiv:1911.01670
  • November 2019.
  • PDF

Community detection plays a significant role in network analysis. However, it also faces numerous challenges like adversarial attacks. How to further improve the performance and robustness of community detection for real-world networks has raised great concerns. In this paper, we propose a concept of adversarial enhancement for community detection, and present two adversarial enhancement algorithms: one is named adversarial enhancement via genetic algorithm (AE-GA), in which the modularity and the number of clusters are used to design a fitness function to solve the resolution limit problem; and the other is called adversarial enhancement via vertex similarity (AE-VS), integrating multiple information of community structures captured by diverse vertex similarities, which scales well on large-scale networks. The two algorithms are tested along with six existing community detection algorithms on four real-world networks. Comprehensive experimental results show that, by comparing with two traditional enhancement strategies, our methods help six community detection algorithms achieve more significant performance improvement. Moreover, experiments on the corresponding adversarial networks indicate that our methods can rebuild the network structure destroyed by adversarial attacks to a certain extent, achieving stronger defense against community detection deception.
  • Time-aware Gradient Attack on Dynamic Network Link Prediction
  • Jinyin Chen, Jian Zhang, Zhi Chen, Min Du, Qi Xuan
  • arXiv preprint arXiv:1911.10561
  • November 2019.
  • PDF

In network link prediction, it is possible to hide a target link from being predicted with a small perturbation on network structure. This observation may be exploited in many real world scenarios, for example, to preserve privacy, or to exploit financial security. There have been many recent studies to generate adversarial examples to mislead deep learning models on graph data. However, none of the previous work has considered the dynamic nature of real-world systems. In this work, we present the first study of adversarial attack on dynamic network link prediction (DNLP). The proposed attack method, namely time-aware gradient attack (TGA), utilizes the gradient information generated by deep dynamic network embedding (DDNE) across different snapshots to rewire a few links, so as to make DDNE fail to predict target links. We implement TGA in two ways: one is based on traversal search, namely TGA-Tra; and the other is simplified with greedy search for efficiency, namely TGA-Gre. We conduct comprehensive experiments which show the outstanding performance of TGA in attacking DNLP algorithms.


Honors & Awards


Skills

Languages: C/C++, Python, Java, SQL, Ruby, ROS
Frameworks: PyTorch, TensorFlow, Rails, Cucumber