
E-book: Explainable Agency in Artificial Intelligence: Research and Practice

Edited by Silvia Tulli and David W. Aha

DRM restrictions

  • Copying: not allowed

  • Printing: not allowed

  • E-book usage:

    Digital Rights Management (DRM)
    The publisher has supplied this book in encrypted form, which means that free software must be installed in order to unlock and read it. To read this e-book you need to create an Adobe ID. More information here. The e-book can be downloaded to 6 devices (one user with the same Adobe ID).

    Required software
    To read this e-book on a mobile device (phone or tablet), you need to install this free app: PocketBook Reader (iOS / Android)

    To read this e-book on a PC or Mac, you need Adobe Digital Editions (a free application designed specifically for e-books; it is not the same as Adobe Reader, which you probably already have on your computer).

    You cannot read this e-book on an Amazon Kindle.

This book focuses on a subtopic of Explainable AI (XAI) called Explainable Agency (EA), which involves producing records of decisions made during an agent's reasoning, summarizing its behavior in human-accessible terms, and providing answers to questions about specific choices and the reasons for them. We distinguish explainable agency from Interpretable Machine Learning (IML), another branch of XAI that focuses on providing insight (typically, for an ML expert) concerning a learned model and its decisions. In contrast, explainable agency typically involves a broader set of AI-enabled techniques, systems, and stakeholders (e.g., end users), and the explanations provided by EA agents are best evaluated in the context of human subject studies.

The chapters of this book explore the concept of endowing intelligent agents with explainable agency, which is crucial for agents to be trusted by humans in critical domains such as finance, self-driving vehicles, and military operations. This book presents the work of researchers from a variety of perspectives and describes challenges, recent research results, lessons learned from applications, and recommendations for future research directions in EA. The historical perspectives of explainable agency and the importance of interactivity in explainable systems are also discussed. Ultimately, this book aims to contribute to the successful partnership between humans and AI systems.

  • Contributes to the topic of Explainable Artificial Intelligence (XAI)

  • Focuses on the XAI subtopic of Explainable Agency

  • Includes an introductory chapter, a survey, and five other original contributions



The book is a collection of cutting-edge research on the topic of explainable agency in artificial intelligence, including counterfactuals, fairness, human evaluations, and iterative and active communication among agents.
1. Introduction
   1.1. Definition of Explainable Agency
   1.2. Historical Perspective
   1.3. Motivation
2. Interactivity and Explainability
   2.1. Position papers by interactivity panel
3. Causality and Explainability
   3.1. Position papers by causality panel
4. Counterfactual Explanations
5. Explainable Agents and Multiagent Systems
6. Fairness and Accountability in Explainable Systems
7. User-centered Approaches
8. Discussion and Future Directions
Dr. Silvia Tulli is an Assistant Professor at Sorbonne University. She received a Marie Curie ITN research fellowship and completed her Ph.D. at Instituto Superior Técnico. Her research interests lie at the intersection of explainable AI, interactive machine learning, and reinforcement learning.

Dr. David W. Aha (UC Irvine, 1990) serves as the Director of the AI Center at the Naval Research Laboratory in Washington, DC. His research interests include goal reasoning agents, deliberative autonomy, case-based reasoning, explainable AI, machine learning (ML), reproducible studies, and related topics.