We are honored to host leading federated learning experts from both academia and industry, who will share their cutting-edge research and viewpoints. The talks will introduce recent advances, open challenges, and new technologies in federated learning.

Jian Pei

Duke University

Title: Data Valuation in Federated Learning

Abstract: To enable practical federated learning, we must not only improve efficiency but also address incentive and fairness concerns. In this talk, I will delve into the related challenges and share our recent endeavors in valuation and personalization in federated learning. In particular, valuation in federated learning seeks to allocate credit to participants in a just and equitable manner, while personalization addresses the individual needs of participants. In the federated setting, these problems pose some unique challenges: for example, the federated learning process is often run only once, and many participants may not be consulted in every round. I will discuss the intuitions and ideas behind our latest methods, as well as some challenges and opportunities for future work.
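
One canonical tool for this kind of credit allocation is the Shapley value. As a rough illustration only (not the speaker's actual method; the client names and the `utility` function below are hypothetical), this sketch computes exact Shapley values by enumerating all coalitions, which also hints at why one-shot federated learning needs approximate methods: the cost grows exponentially with the number of participants.

```python
from itertools import combinations
from math import factorial

def shapley_values(clients, utility):
    """Exact Shapley values over all coalitions of `clients`.

    `utility` maps a frozenset of clients to a score, e.g., the accuracy
    of a model trained on the coalition's joint data. The enumeration is
    exponential in len(clients), hence only a toy illustration.
    """
    n = len(clients)
    values = {}
    for c in clients:
        others = [x for x in clients if x != c]
        total = 0.0
        for size in range(n):
            for coalition in combinations(others, size):
                s = frozenset(coalition)
                weight = factorial(size) * factorial(n - size - 1) / factorial(n)
                total += weight * (utility(s | {c}) - utility(s))
        values[c] = total
    return values

# Hypothetical additive utility: accuracy each client's data contributes.
acc = {"A": 0.10, "B": 0.25, "C": 0.05}
print(shapley_values(list(acc), lambda s: sum(acc[c] for c in s)))
# For an additive utility, each Shapley value equals the client's own term.
```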

Bio: Jian Pei is a Professor at Duke University and a renowned expert in the fields of data science, data mining, database systems, machine learning, and information retrieval. He has been recognized as a Fellow of the Royal Society of Canada, the Canadian Academy of Engineering, ACM, and IEEE for his contributions to the development of data science principles and techniques for data-driven and data-intensive applications. Jian Pei is a prolific author, publishing in premier academic venues, and his works have been cited over 100,000 times. He has received numerous awards, including the 2017 ACM SIGKDD Innovation Award, the 2015 ACM SIGKDD Service Award, and the 2014 IEEE ICDM Research Contributions Award. In addition to his research accomplishments, Jian Pei served as the chair of ACM SIGKDD and the Editor-in-Chief of IEEE Transactions on Knowledge and Data Engineering.

Salman Avestimehr

USC & FedML

Title: FedLLM: Federated Training of Large Language Models on Private, Sensitive, and Siloed Data

Abstract: Large language models (LLMs) promise to revolutionize many products and services, so it is not surprising that every tech enterprise (and even individuals) now desires its own customized models. However, the data needed to customize (or fine-tune) such models is often spread across many silos (e.g., edge nodes, users’ devices, multi-clouds), and it cannot be pooled due to privacy, security, and regulatory constraints, as well as cloud costs. I will discuss how we ease this problem with FedLLM, a decentralized/federated machine learning ecosystem that enables collaborative training of large models across the edge and the cloud. I will also highlight some of the key research challenges in this area, as well as recent progress towards addressing them.
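
One widely used recipe for federated LLM fine-tuning is to freeze the base model and federate only small adapter weights, which shrinks per-round communication from gigabytes to megabytes. The sketch below is a generic illustration of that recipe, not FedLLM's actual implementation; it averages hypothetical LoRA-style adapter tensors from several silos.

```python
import numpy as np

def aggregate_adapters(client_adapters, client_sizes):
    """Sample-weighted average of adapter tensors only; the frozen base
    model never leaves the silos (FedAvg applied to LoRA-style weights).

    client_adapters: list of {param_name: np.ndarray} dicts, same keys.
    client_sizes: number of local fine-tuning examples per client.
    """
    total = sum(client_sizes)
    return {
        name: sum((size / total) * adapters[name]
                  for adapters, size in zip(client_adapters, client_sizes))
        for name in client_adapters[0]
    }

# Two hypothetical silos, each holding rank-8 LoRA factors for one layer.
rng = np.random.default_rng(0)
clients = [{"layer0.lora_A": rng.normal(size=(768, 8)),
            "layer0.lora_B": rng.normal(size=(8, 768))} for _ in range(2)]
merged = aggregate_adapters(clients, client_sizes=[1200, 300])
print({name: tensor.shape for name, tensor in merged.items()})
```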

Bio: Salman Avestimehr (https://www.avestimehr.com) is the Dean’s Professor and the inaugural director of the USC-Amazon Center on Trustworthy AI (https://trustedai.usc.edu) in the ECE and CS departments of the University of Southern California. He is also the CEO and co-founder of FedML (https://fedml.ai). His research interests include decentralized and federated machine learning, information theory, security, and privacy. Dr. Avestimehr has received many awards for his research, including the Presidential PECASE award from the White House (President Obama), the James L. Massey Research & Teaching Award from the IEEE Information Theory Society, an Information Theory Society and Communications Society Joint Paper Award, and several best paper awards at conferences. He is a Fellow of the IEEE.

Yiran Chen

Duke University

Title: Revolutionizing Federated Learning: A Leap in Efficiency, Robustness, and Performance

Abstract: Federated learning (FL) has emerged as a promising approach for edge computing, but its efficiency is challenged by device heterogeneity. To address this, asynchronous FL (AFL) and semi-asynchronous FL (SAFL) methods have been proposed; however, these methods suffer from poor accuracy and efficiency when dealing with non-IID data and highly diverse devices. In response, we introduce FedSEA, a semi-asynchronous FL framework that addresses accuracy drops by balancing aggregation frequency and predicting the arrival of local updates. Another challenge is the unexpected exit of passive parties in vertical federated learning (VFL), which leads to performance degradation and IP leakage; to mitigate these vulnerabilities, we propose the Party-wise Dropout and DIMIP defense methods. Client-wise data heterogeneity also affects FL convergence, and our FedCor framework addresses this by modeling loss correlations between clients with a Gaussian process and reducing the expected global loss. We further uncover external covariate shift in FL, demonstrating that normalization layers are crucial and that layer normalization is an effective remedy. Additionally, class imbalance degrades FL performance, and our proposed Federated Class-balanced Sampling (Fed-CBS) mechanism reduces this imbalance while employing homomorphic encryption for privacy preservation. Lastly, we introduce Federated Instruction Tuning (FedIT), which leverages FL for instruction tuning of large language models, addressing the challenges of acquiring diverse high-quality data and preserving privacy while improving generalizability. Together, these advances enhance the efficiency, robustness, convergence rates, and performance of FL in a variety of real-world scenarios.
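
To make the class-imbalance point concrete, here is a minimal greedy sketch of class-balance-aware client selection. It only illustrates the idea; the actual Fed-CBS mechanism measures imbalance under homomorphic encryption, so the server never sees the raw per-client label histograms used below.

```python
import numpy as np

def imbalance(counts):
    """Squared distance of the aggregate label distribution from uniform."""
    p = counts / counts.sum()
    return float(np.sum((p - 1.0 / len(p)) ** 2))

def select_clients(label_counts, k):
    """Greedily pick k clients whose combined labels are most balanced.

    label_counts: (num_clients, num_classes) per-client label histograms.
    """
    chosen, agg = [], np.zeros(label_counts.shape[1])
    for _ in range(k):
        candidates = [i for i in range(len(label_counts)) if i not in chosen]
        best = min(candidates, key=lambda i: imbalance(agg + label_counts[i]))
        chosen.append(best)
        agg = agg + label_counts[best]
    return chosen

# Three heavily skewed clients and one balanced client over 4 classes.
counts = np.array([[90, 5, 3, 2],
                   [2, 88, 6, 4],
                   [3, 4, 90, 3],
                   [25, 25, 25, 25]])
print(select_clients(counts, k=2))  # picks the balanced client first
```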

Bio: Yiran Chen is the John Cocke Distinguished Professor of Electrical and Computer Engineering at Duke University, where he serves as the director of the NSF AI Institute for Edge Computing Leveraging Next-generation Networks (Athena) and of the NSF Industry-University Cooperative Research Center (IUCRC) for Alternative Sustainable and Intelligent Computing (ASIC), and as the co-director of the Duke Center for Computational Evolutionary Intelligence (DCEI). His group focuses on research on new memory and storage systems, machine learning and neuromorphic computing, and mobile computing systems. He is the Editor-in-Chief of the IEEE Circuits and Systems Magazine. He has received numerous awards for his technical contributions and professional services, such as the IEEE CASS Charles A. Desoer Technical Achievement Award and the IEEE Computer Society Edward J. McCluskey Technical Achievement Award. He has been a distinguished lecturer of IEEE CEDA and IEEE CAS. He is a Fellow of the AAAS, ACM, and IEEE, and serves as the chair of ACM SIGDA.

Aidong Zhang

University of Virginia

Title: Multi-party Learning in Federated Environments

Abstract: A new era of collaborative learning is emerging as part of the next phase of ubiquitous computing, wherein researchers at different sites will work together to correlate the disparate data they have separately acquired and eventually create a sophisticated decision-making model. It is thus imperative to establish a platform that supports collaborative, multi-party data analysis. Through such a platform, participating parties can share their data with one another under different degrees of privacy control via a centralized server, and can compute on each other’s data either by sharing their data directly with the server or by sharing only their model parameters, collaboratively deriving a solution with the other parties. In this talk, I will introduce Bridge, our NSF-funded research project on supporting scalable multi-party learning and data analysis in federated environments. Multi-party learning enables multiple parties to collaboratively train a statistical model on heterogeneous devices and servers, with or without sharing their local data, which is specifically needed for privacy-sensitive applications such as healthcare and financial analysis.
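
The parameter-sharing mode described above can be illustrated with a minimal sketch (a generic federated gradient descent example, not the Bridge platform itself): each party computes gradients on its own data and shares only those updates with the server, whose sample-weighted average equals the gradient on the pooled data.

```python
import numpy as np

def local_gradient(w, X, y):
    """Least-squares gradient computed entirely on a party's own data."""
    return 2 * X.T @ (X @ w - y) / len(y)

def multi_party_fit(parties, dim, rounds=200, lr=0.1):
    """Each round, parties send only gradients to the server, never their
    raw (X, y) data; weighting by sample count matches the pooled gradient."""
    w = np.zeros(dim)
    sizes = np.array([len(y) for _, y in parties])
    for _ in range(rounds):
        grads = [local_gradient(w, X, y) for X, y in parties]
        w -= lr * np.average(grads, axis=0, weights=sizes)
    return w

# Two hypothetical parties with different amounts of local data.
rng = np.random.default_rng(1)
w_true = np.array([2.0, -1.0])
parties = []
for n in (50, 200):
    X = rng.normal(size=(n, 2))
    parties.append((X, X @ w_true + 0.01 * rng.normal(size=n)))
print(multi_party_fit(parties, dim=2))  # converges to roughly [2, -1]
```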

Bio: Aidong Zhang is the Thomas M. Linville Endowed Professor of Computer Science, with joint appointments in Data Science and Biomedical Engineering, at the University of Virginia (UVA). Prof. Zhang’s research interests include machine learning, data science, bioinformatics and computational biology, and health informatics. Prof. Zhang is a Fellow of ACM, IEEE, and AIMBE.

Carl Yang

Emory University

Title: Federated Learning with Graph Data for Healthcare

Abstract: Graphs are ubiquitous data structures that provide powerful representations for objects with interactions. Empowered by recent progress in federated learning, researchers have made rapid technical advances in mining distributed graph datasets. At the same time, research and clinical practice in public health have generated large volumes of separately owned graph data, for which the exploration of federated learning is natural but still limited. In this talk, I will introduce our research vision and agenda towards federated learning with graph data for healthcare, followed by examples of our recent progress on graph-level, subgraph-level, link-level, and node-level federated learning frameworks. I will conclude the talk with discussions of future directions that can benefit from further collaborations with researchers interested in federated learning and health informatics in general.

Bio: Carl Yang is an Assistant Professor at Emory University. He received his Ph.D. in Computer Science from the University of Illinois Urbana-Champaign in 2020, and his B.Eng. in Computer Science and Engineering from Zhejiang University in 2014. His research interests span graph data mining, applied machine learning, knowledge graphs, and federated learning, with applications in recommender systems, social networks, neuroscience, and healthcare. Carl’s research results have been published in 100+ peer-reviewed papers in top venues across data mining and health informatics. He is also a recipient of the Dissertation Completion Fellowship of UIUC in 2020, the Best Paper Award of ICDM in 2020, the Amazon Research Award in 2022, the Best Paper Award of KDD Health Day in 2022, the Best Paper Award of ML4H in 2022, the NIH K25 Award in 2023, and multiple Emory internal research awards.

Heiko Ludwig

IBM Research

Title: Maintaining Privacy and Secrecy in Federated Learning in the Enterprise

Abstract: Federated learning is starting to gain momentum in enterprise scenarios, mostly in the context of information silos or regulatory constraints. Many of these emergent use cases involve vertical FL or hybrid horizontal-vertical components, providing the most value to organizations teaming up to contribute different types of data to a common model. This talk will introduce some of these scenarios and discuss the privacy techniques that enable them, such as the use of fully homomorphic encryption. These techniques are applicable both to DNNs and to classical model types such as graphs. We will also discuss the limits of their applicability and the opportunities for future research.
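
As a concrete illustration of this pattern, the sketch below uses the open-source TenSEAL library (chosen here for illustration; not IBM's actual stack, which the talk would describe): clients encrypt their model updates under the CKKS scheme, the server sums the ciphertexts without decrypting them, and only a key holder can recover the aggregate.

```python
import tenseal as ts  # pip install tenseal

# Key generation (in practice held by the clients, not the server).
context = ts.context(ts.SCHEME_TYPE.CKKS, poly_modulus_degree=8192,
                     coeff_mod_bit_sizes=[60, 40, 40, 60])
context.global_scale = 2 ** 40

# Each client encrypts its local model update.
updates = [[0.1, -0.2, 0.3], [0.4, 0.0, -0.1], [-0.2, 0.1, 0.2]]
encrypted = [ts.ckks_vector(context, u) for u in updates]

# The server aggregates ciphertexts without ever decrypting them.
agg = encrypted[0]
for e in encrypted[1:]:
    agg = agg + e

# Only a secret-key holder can recover the (averaged) aggregate.
avg = [x / len(updates) for x in agg.decrypt()]
print(avg)  # approximately [0.1, -0.0333, 0.1333]
```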

Bio: Heiko Ludwig is a Principal Research Scientist and Senior Manager of the AI Platforms department at IBM’s Almaden Research Center in San Jose, CA. Heiko leads research on computational platforms for AI, focusing on the security, privacy, scaling, and reliability of machine learning and inference, and he leads IBM Research’s work on Distributed AI. Heiko has worked on various problems in distributed systems and artificial intelligence over his career, publishing more than 150 refereed articles and conference papers as well as a book on federated learning. He is an ACM Distinguished Engineer and a managing editor of the International Journal of Cooperative Information Systems. The results of his work contribute to various IBM lines of business and open-source projects. Prior to the Almaden Research Center, Heiko held different positions at IBM Research in Switzerland, the US, Argentina, and Brazil. He holds a Master’s (Diplom) degree and a PhD in information systems from Otto-Friedrich University Bamberg, Germany.

Tian Li

Carnegie Mellon University

Title: Tilted Losses in Machine Learning: Theory and Applications to Federated Learning

Abstract: Heterogeneity not only affects the convergence of federated learning (FL) models, but also poses challenges for a number of other critical concerns, including fairness. In this talk, I first introduce a fair federated learning objective, q-Fair FL (q-FFL), to promote a consistent quality of service for all clients in the network. Partly motivated by q-FFL and exponential tilting, I then focus on a more general framework that addresses the limitations of empirical risk minimization via tilting, named tilted empirical risk minimization (TERM). I make connections between TERM and related approaches, such as Value-at-Risk, Conditional Value-at-Risk, and distributionally robust optimization, and present batch and stochastic first-order optimization methods for solving TERM at scale. Finally, I show that this approach can be used for a multitude of applications in machine learning, such as enforcing fairness between subgroups, mitigating the effect of outliers, and handling class imbalance, delivering state-of-the-art performance relative to more complex, bespoke solutions for these problems.
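
For reference, the two objectives at the heart of the talk can be stated compactly (notation follows the q-FFL and TERM papers: F_k is client k's local empirical risk, p_k its relative weight, and l(x_i; theta) a per-sample loss):

```latex
% q-FFL: a larger q shifts weight toward clients with higher loss,
% promoting a more uniform quality of service across the network.
\min_{\theta} \; f_q(\theta) \;=\; \sum_{k=1}^{m} \frac{p_k}{q+1}\, F_k(\theta)^{q+1}

% TERM: the tilt t interpolates between average- and worst-case risk;
% t -> 0 recovers ERM, t -> +infinity the max loss, t -> -infinity the min loss.
\widetilde{R}(t;\theta) \;=\; \frac{1}{t} \log\!\left( \frac{1}{N} \sum_{i=1}^{N} e^{\, t\, \ell(x_i;\theta)} \right)
```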

Bio: Tian Li is a fifth-year Ph.D. student in the Computer Science Department at Carnegie Mellon University working with Virginia Smith. Her research interests are in distributed optimization, federated learning, and trustworthy ML. Prior to CMU, she received her undergraduate degrees in Computer Science and Economics from Peking University. She received the Best Paper Award at the ICLR Workshop on Security and Safety in Machine Learning Systems, was invited to participate in the EECS Rising Stars Workshop, and was recognized as a Rising Star in Machine Learning/Data Science by multiple institutions.

Ananda T. Suresh

Google Research

Title: Scaling Model Size in Cross-device Federated Learning

Abstract: Cross-device federated learning (FL) enables training a model on data distributed across what are typically millions of edge devices, without the data ever leaving the devices. Most studies in cross-device federated learning focus on small models due to the server-client communication and on-device computation bottlenecks. In this talk, I will describe the system constraints of cross-device federated learning and explore the feasibility of training larger models in this setting. I will start with training mid-sized neural models and then address the challenge of training larger transformer models in FL. I will conclude the talk with some future research directions that might be of interest to researchers working on federated learning.
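
A quick back-of-envelope calculation (with illustrative numbers, not figures from the talk) shows why model size is the binding constraint: per round, every sampled device must download and then upload the full set of trained parameters.

```python
def round_traffic_gb(num_params, clients_per_round, bytes_per_param=4):
    """Total server<->client traffic for one FL round, in gigabytes,
    assuming full-model download + upload and no compression."""
    per_client_bytes = 2 * num_params * bytes_per_param  # down + up
    return per_client_bytes * clients_per_round / 1e9

# A 10M-parameter model is manageable per round; a 1B-parameter model is not.
for n in (10_000_000, 100_000_000, 1_000_000_000):
    print(f"{n:>13,} params: {round_traffic_gb(n, clients_per_round=100):7.1f} GB/round")
```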

Bio: Ananda Theertha Suresh is a research scientist at Google Research, New York. He received his PhD from the University of California San Diego, where he was advised by Prof. Alon Orlitsky. His research focuses on theoretical and algorithmic aspects of machine learning, information theory, privacy, and statistics. He is a recipient of the 2017 Marconi Society Paul Baran Young Scholar Award and a co-recipient of best paper awards at NeurIPS 2015, ALT 2020, and CCS 2021, as well as a best paper honorable mention at ICML 2017.