Aggregation for Privately Trained Different Types of Local Models




Secure aggregation, Generative adversarial network, Differential privacy


Machine learning has been a thriving topic in recent years, with many practical applications and active research directions. Within machine learning, model aggregation is an important area: the idea is to build a global model by aggregating trained local models. However, traditional aggregation methods based on parameter averaging cannot aggregate local models of different types and structures, because averaging fails across heterogeneous sets of parameters. To address this problem, we propose a new aggregation method suited to heterogeneous local models. To achieve this, we transfer knowledge from the local models to the global model. First, we propose differentially private GANs that let local parties generate synthetic data related to their training data. Second, we label these synthetic samples by the majority vote of the local models' predictions. Finally, we train the global model on the labelled synthetic data. By combining synthetic data with labels from the local models, knowledge is transferred from the local models to the global model. We evaluate our scheme on the Adult, MNIST and Fashion-MNIST datasets under different settings; experimental results show that our scheme achieves an accurate global model with low privacy loss. Moreover, its easily implemented building blocks make our scheme efficient and practical for applications.
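The majority-vote labelling step described above can be sketched as follows. This is a minimal illustration, not the paper's implementation: the local models, synthetic samples, and threshold rules below are all hypothetical stand-ins, chosen only to show that heterogeneous models can participate as long as each exposes a prediction function.

```python
from collections import Counter

def majority_vote_label(models, sample):
    """Label one synthetic sample by the majority of local-model predictions."""
    votes = Counter(m(sample) for m in models)
    return votes.most_common(1)[0][0]

# Hypothetical local models of different types: any callable mapping a
# sample to a class label can vote, regardless of its internal structure.
local_models = [
    lambda x: 1 if x > 0.5 else 0,   # e.g. a threshold rule
    lambda x: 1 if x > 0.4 else 0,   # e.g. a decision stump
    lambda x: 0,                     # e.g. a constant classifier
]

synthetic_data = [0.2, 0.45, 0.9]    # stand-in for DP-GAN output
labelled = [(x, majority_vote_label(local_models, x)) for x in synthetic_data]
# The (sample, label) pairs can then be used to train the global model.
```

Because the vote only consumes predictions, the aggregation never touches the local models' parameters, which is what allows models of different types and structures to be combined.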




How to Cite

Han, C., & Xue, R. (2020). Aggregation for Privately Trained Different Types of Local Models. EAI Endorsed Transactions on Security and Safety, 7(26), e2.