
E-book: Federated Learning: Theory and Practice

Edited by Lam M. Nguyen (Staff Research Scientist, IBM Research, Thomas J. Watson Research Center, Yorktown Heights, NY, USA), Trong Nghia Hoang (Assistant Professor, Washington State University, USA), and Pin-Yu Chen (Principal Research Scientist, IBM Thomas J. Watson Research Center, Yorktown Heights, NY, USA)
  • Format: PDF+DRM
  • Publication date: 09-Feb-2024
  • Publisher: Academic Press Inc
  • Language: English
  • ISBN-13: 9780443190384

DRM restrictions

  • Copying: not allowed

  • Printing: not allowed

  • E-book usage:

    Digital rights management (DRM)
    The publisher has supplied this book in encrypted form, which means that free software must be installed to unlock and read it. To read this e-book you must create an Adobe ID. More information is available here. The e-book can be downloaded to up to 6 devices (one user with the same Adobe ID).

    Required software
    To read this e-book on a mobile device (phone or tablet), you must install the free PocketBook Reader app (iOS / Android).

    To read this e-book on a PC or Mac, you need Adobe Digital Editions (a free application designed specifically for e-books; it is not the same as Adobe Reader, which you probably already have on your computer).

    You cannot read this e-book on an Amazon Kindle.

Federated Learning: Theory and Practice provides a holistic treatment of federated learning, starting with a broad overview of federated learning as a distributed learning system with various forms of decentralized data and features. Part I then gives a detailed exposition of core challenges and practical modeling techniques and solutions, spanning communication efficiency, theoretical convergence, and security, viewed from different perspectives. Part II covers emerging challenges stemming from the many socially driven concerns around federated learning as a future public machine learning service, and Parts III and IV present a wide array of industrial applications of federated learning, including potential venues and visions for federated learning in the near future. This book provides a comprehensive and accessible introduction to federated learning, suitable for researchers and students in academia as well as industrial practitioners who seek to leverage the latest advances in machine learning for their entrepreneurial endeavors.
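To make the core idea concrete, the canonical federated learning loop (federated averaging, or FedAvg) can be sketched as follows. This is a hypothetical minimal illustration, not code from the book: each client runs a few gradient steps on its own private data, and a server averages the resulting model weights, weighted by local dataset size. All function names and hyperparameters here are the author's assumptions for the sketch.

```python
import numpy as np

def local_sgd(w, X, y, lr=0.1, steps=50):
    """A few gradient-descent steps on squared loss, using one client's private data."""
    for _ in range(steps):
        grad = 2 * X.T @ (X @ w - y) / len(y)
        w = w - lr * grad
    return w

def fedavg_round(w_global, clients):
    """One communication round: broadcast the global model, train locally,
    then average client weights proportionally to local dataset size."""
    n_total = sum(len(y) for _, y in clients)
    updates = [local_sgd(w_global.copy(), X, y) for X, y in clients]
    return sum((len(y) / n_total) * w for w, (_, y) in zip(updates, clients))

# Synthetic example: three clients, each holding a private shard of
# noiseless linear-regression data with the same underlying weights.
rng = np.random.default_rng(0)
w_true = np.array([2.0, -1.0])
clients = []
for _ in range(3):
    X = rng.normal(size=(40, 2))
    clients.append((X, X @ w_true))

w = np.zeros(2)
for _ in range(20):  # twenty federated rounds
    w = fedavg_round(w, clients)
```

Note that the raw data never leaves a client; only model weights are communicated — which is precisely what gives rise to the communication-efficiency, convergence, and privacy questions treated in Part I.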
PART I: Fundamentals and Key Challenges
1. Gradient Descent-Type Methods: Background and Simple Unified Convergence Analysis
2. Considerations on the Theory of Training Models with Differential Privacy
3. Personalized Federated Learning: Theory and Practice
4. Privacy Preserving Federated Learning: Algorithms and Guarantees
5. Securing Federated Learning: Defending Against Poisoning and Evasion Attacks
6. Adversarial Robustness in Federated Learning
7. Evaluating Gradient Inversion Attacks and Defenses

PART II: Emerging Topics
8. Fairness in Federated Learning
9. Meta Federated Learning
10. Topology-Aware Federated Learning
11. Multi-Tier Federated Learning with Vertically and Horizontally Partitioned Data
12. Vertical Asynchronous Federated Learning
13. Hyperparameter Tuning for Federated Learning - Systems and Practices
14. Hyper-parameter Optimization for Federated Learning
15. Federated Sequential Decision-Making: Bayesian Optimization, Reinforcement Learning and Beyond
16. Data Valuation in Federated Learning

PART III: Applications
17. Incentives in Federated Learning
18. Introduction to Federated Quantum Machine Learning
19. Federated Quantum Natural Gradient Descent for Quantum Federated Learning
20. Mobile Computing Framework for Federated Learning
21. Federated Learning for Speech Recognition and Acoustic Processing

PART IV: Future Directions
22. Ethical Considerations and Legal Issues Relating to Federated Learning
Lam M. Nguyen is a Staff Research Scientist at IBM Research, Thomas J. Watson Research Center, working at the intersection of optimization and machine learning/deep learning. He is also the PI of ongoing MIT-IBM Watson AI Lab projects. Dr. Nguyen received his B.S. degree in Applied Mathematics and Computer Science from Lomonosov Moscow State University in 2008, his M.B.A. degree from McNeese State University in 2013, and his Ph.D. degree in Industrial and Systems Engineering from Lehigh University in 2018. Dr. Nguyen has extensive research experience in optimization for machine learning problems. He has published his work mainly in top AI/ML and optimization venues, including ICML, NeurIPS, ICLR, AAAI, AISTATS, the Journal of Machine Learning Research, and Mathematical Programming. He has served as an Action/Associate Editor for the Journal of Machine Learning Research, Machine Learning, Neural Networks, IEEE Transactions on Neural Networks and Learning Systems, and the Journal of Optimization Theory and Applications, and as an Area Chair for the ICML, NeurIPS, ICLR, AAAI, CVPR, UAI, and AISTATS conferences. His current research interests include the design and analysis of learning algorithms, optimization for representation learning, dynamical systems for machine learning, federated learning, reinforcement learning, time series, and trustworthy/explainable AI.

Trong Nghia Hoang: Dr. Hoang received his Ph.D. in Computer Science from the National University of Singapore (NUS) in 2015. From 2015 to 2017, he was a Research Fellow at NUS, followed by a postdoc at MIT (2017-2018). From 2018 to 2020, he was a Research Staff Member and Principal Investigator at the MIT-IBM Watson AI Lab in Cambridge, Massachusetts. In November 2020, Dr. Hoang joined the AWS AI Labs of Amazon in Santa Clara, California as a senior research scientist. His research interests span the broad areas of deep generative modeling with applications to (personalized) federated learning, meta learning, and black-box model fusion and/or reconfiguration. He has published actively in key machine learning and AI outlets such as ICML, NeurIPS, and AAAI (among others). He has served as a senior program committee member at AAAI and IJCAI and as a program committee member at ICML, NeurIPS, ICLR, and AISTATS. He also organized a recent NeurIPS 2021 workshop on federated learning.

Pin-Yu Chen: Dr. Pin-Yu Chen is a principal research staff member at the IBM Thomas J. Watson Research Center, Yorktown Heights, NY, USA. He is also the chief scientist of the RPI-IBM AI Research Collaboration and PI of ongoing MIT-IBM Watson AI Lab projects. Dr. Chen received his Ph.D. degree in electrical engineering and computer science from the University of Michigan, Ann Arbor, USA, in 2016. Dr. Chen's recent research focuses on adversarial machine learning and the robustness of neural networks. His long-term research vision is to build trustworthy machine learning systems. He is a co-author of the book “Adversarial Robustness for Machine Learning”. At IBM Research, he has received several research accomplishment awards, including IBM Master Inventor, the IBM Corporate Technical Award, and the IBM Pat Goldberg Memorial Best Paper Award. His research contributes to IBM open-source libraries including the Adversarial Robustness Toolbox (ART 360) and AI Explainability 360 (AIX 360). He has published more than 50 papers on trustworthy machine learning at major AI and machine learning conferences, given tutorials at NeurIPS 2022, AAAI (2022, 2023), IJCAI 2021, CVPR (2020, 2021, 2023), ECCV 2020, ICASSP (2020, 2022, 2023), KDD 2019, and IEEE Big Data 2018, and organized several workshops on adversarial machine learning. He received the IEEE GLOBECOM 2010 GOLD Best Paper Award and the UAI 2022 Best Paper Runner-Up Award.