Differential Privacy

Differential privacy is the de facto standard notion of privacy for privacy-preserving data analysis and publishing. Roughly speaking, it guarantees that the output distribution of an analysis remains essentially the same regardless of whether any individual joins, or refrains from joining, the dataset. This is usually achieved by adding a carefully calibrated amount of randomness, or “noise”, to a computation performed on the dataset. The noise makes the output insensitive to any single record, so every individual in the dataset can plausibly deny that their data was included.
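As a minimal sketch of this idea (an illustration only, not drawn from the publications below), the classic Laplace mechanism adds noise whose scale is calibrated to a query's sensitivity, i.e. how much the query's answer can change when one individual's record is added or removed. The dataset, function names, and parameter values here are made up for the example.

```python
import numpy as np

def laplace_mechanism(true_value: float, sensitivity: float, epsilon: float) -> float:
    """Release true_value with epsilon-differential privacy via the Laplace mechanism.

    sensitivity is the maximum change in true_value caused by adding or
    removing any single individual's record (the L1 sensitivity).
    """
    scale = sensitivity / epsilon
    return true_value + np.random.laplace(loc=0.0, scale=scale)

# Example: privately release a count query over a toy dataset.
# Adding or removing one record changes a count by at most 1, so sensitivity = 1.
ages = [23, 45, 31, 62, 29]
true_count = sum(1 for a in ages if a >= 30)
private_count = laplace_mechanism(true_count, sensitivity=1.0, epsilon=0.5)
print(f"true count: {true_count}, private release: {private_count:.2f}")
```

A smaller epsilon means a stronger privacy guarantee but larger noise, which is exactly the privacy-utility trade-off discussed below.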

My research on differential privacy focuses on developing novel mechanisms with differential privacy guarantees, while striking a balance among privacy, utility, and efficiency.

Related publications

  1. SOUPS
    New Differential Privacy Communication Pipeline and Design Framework (Poster)
    Jingyu Jia, Zikai Wen, Zheli Liu, and Changyu Dong
    In 18th Symposium on Usable Privacy and Security (SOUPS 2022), 2022
  2. TPDS
    Differentially Private Byzantine-robust Federated Learning
    Xu Ma, Xiaoqian Sun, Yuduo Wu, Zheli Liu, Xiaofeng Chen, and Changyu Dong
IEEE Transactions on Parallel and Distributed Systems, 2022
  3. ICDM
    Differentially Private String Sanitization for Frequency-Based Mining Tasks
Huiping Chen, Changyu Dong, Liyue Fan, Grigorios Loukides, Solon P. Pissis, and Leen Stougie
    In IEEE International Conference on Data Mining, 2021
  4. USENIX
    How to Make Private Distributed Cardinality Estimation Practical, and Get Differential Privacy for Free
    Changhui Hu, Jin Li, Zheli Liu, Xiaojie Guo, Yu Wei, Xuan Guang, Grigorios Loukides, and Changyu Dong
    In 30th USENIX Security Symposium, 2021