Federated Learning and Knowledge Distillation with Granular Computing
Department of Electrical & Computer Engineering
University of Alberta, Edmonton, Canada
The rapid progress in data analytics has brought with it a number of important challenges. The most visible and pressing requirements concern the data themselves and the way they are handled in system modeling. In the landscape of data analytics, we identify three ongoing quests with far-reaching methodological implications: (i) modeling in the presence of privacy and security constraints, (ii) efficient model building with limited data of varying quality, and (iii) deployment of advanced, computationally demanding models on platforms with limited computing resources. To address these challenges, federated learning and knowledge distillation have emerged as conceptually and algorithmically sound directions.
In this talk, we demonstrate how various ways of conceptualizing information granules, such as sets, fuzzy sets, rough sets, and others, lead to innovative augmentations of the paradigms stated above and, in turn, to interesting and efficient solutions. It is also advocated that Granular Computing enriches and augments the principles of federated learning and knowledge distillation.
To establish a sound conceptual modeling setting, we include a brief discussion of the information granule-oriented design of rule-based architectures. A way of forming the rules through unsupervised federated learning is discussed along with the associated algorithmic developments. A granular characterization of the model formed by the server vis-à-vis the data located at the individual clients is presented. It is demonstrated that the quality of the rules at the clients' end is described in terms of granular parameters, so that the global model subsequently becomes represented as a granular model. The roles of granular augmentations of models in the realm of logic-oriented knowledge distillation are also discussed.
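To give a concrete flavor of how rule antecedents might be formed through unsupervised federated learning, the following minimal sketch (an illustration under simplifying assumptions, not the algorithm presented in the talk; all function names are hypothetical) has each client cluster its own data locally and lets the server aggregate only the resulting prototypes through a size-weighted average, so raw data never leave the clients:

```python
import numpy as np

def local_kmeans(X, k, iters=20, seed=0):
    """Local clustering at one client: plain k-means (Lloyd's algorithm)."""
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), size=k, replace=False)]
    for _ in range(iters):
        # Assign each point to its nearest center, then recompute the means.
        labels = np.argmin(((X[:, None] - centers[None]) ** 2).sum(-1), axis=1)
        for j in range(k):
            if np.any(labels == j):
                centers[j] = X[labels == j].mean(axis=0)
    return centers

def federated_round(client_data, k):
    """One round of unsupervised federated learning: every client clusters
    locally and sends only k prototypes; the server forms a size-weighted
    average of the prototypes (FedAvg-style) without seeing raw data."""
    protos = [local_kmeans(X, k) for X in client_data]
    weights = np.array([len(X) for X in client_data], dtype=float)
    weights /= weights.sum()
    # Naive prototype alignment: sort by the first coordinate before averaging.
    # A real method needs a proper matching step (e.g., Hungarian assignment).
    protos = [p[np.argsort(p[:, 0])] for p in protos]
    return sum(w * p for w, p in zip(weights, protos))
```

The averaged prototypes could then serve as the condition parts of rules in the global model; characterizing how well they fit each client's local data is where the granular (interval- or fuzzy-set-valued) parameters described above come in.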