Transparency and Interpretability for Learned Representations of Artificial Neural Networks
Author: Richard Meyes · Language: English · Paperback – 28 Nov 2022
Price: 518.13 lei
Old price: 647.66 lei
-20% New
Express points: 777
Estimated price in other currencies:
99.16€ • 103.00$ • 82.37£
In stock
Economy delivery: 13-27 January 25
Specifications
ISBN-13: 9783658400033
ISBN-10: 365840003X
Pages: 211
Illustrations: XXI, 211 p., 73 illus., 70 illus. in color. Textbook for the German-language market.
Dimensions: 148 x 210 mm
Weight: 0.34 kg
Edition: 1st ed. 2022
Publisher: Springer Fachmedien Wiesbaden
Series: Springer Vieweg
Place of publication: Wiesbaden, Germany
Contents
Introduction
Background & Foundations
Methods and Terminology
Related Work
Research Studies
Transfer Studies
Critical Reflection & Outlook
Summary
About the author
Richard Meyes is head of the research group "Interpretable Learning Models" at the Institute of Technologies and Management of Digital Transformation at the University of Wuppertal. His current research focuses on the transparency and interpretability of the decision-making processes of artificial neural networks.
Back cover text
Artificial intelligence (AI) is a concept whose meaning and perception have changed considerably over the last decades. Starting with isolated, purely theoretical research efforts in the 1950s, AI has grown into a fully developed modern research field and may arguably emerge as one of mankind's most important technological advancements. Despite this rapid technological progress, key questions about the transparency, interpretability, and explainability of an AI's decision-making remain unanswered. Thus, a young research field known under the general term Explainable AI (XAI) has emerged from increasingly strict requirements for AI used in safety-critical or ethically sensitive domains. An important branch of XAI develops methods that facilitate a deeper understanding of the knowledge learned by artificial neural systems. This book presents a series of scientific studies that shed light on how to adopt an empirical, neuroscience-inspired approach to investigating a neural network's learned representations, in the same spirit as neuroscientific studies of the brain.
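To give a concrete flavor of what such a neuroscience-inspired investigation can look like in practice, here is a minimal, self-contained sketch (our own illustration, not code from the book): a tiny neural network is trained on the XOR task, then each hidden unit is silenced in turn and the resulting accuracy drop is measured, loosely analogous to lesion studies in neuroscience. All names, the architecture, and the hyperparameters are arbitrary choices made for this illustration.

# Illustrative "lesion study" sketch (assumption: not the book's code).
import numpy as np

rng = np.random.default_rng(0)

# Toy task: XOR inputs and binary labels.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([0, 1, 1, 0], dtype=float)

# Tiny MLP: 2 -> 8 -> 1, sigmoid activations, randomly initialized.
W1 = rng.normal(0, 1, (2, 8)); b1 = np.zeros(8)
W2 = rng.normal(0, 1, (8, 1)); b2 = np.zeros(1)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def forward(X, mask=None):
    h = sigmoid(X @ W1 + b1)
    if mask is not None:        # ablation: silence selected hidden units
        h = h * mask
    return h, sigmoid(h @ W2 + b2).ravel()

# Plain full-batch gradient descent on binary cross-entropy.
lr = 1.0
for step in range(10000):
    h, out = forward(X)
    grad_out = out - y                                # dL/dz for sigmoid + BCE
    grad_h = grad_out[:, None] * W2.T * h * (1 - h)   # backprop to hidden layer
    W2 -= lr * h.T @ grad_out[:, None] / len(X)
    b2 -= lr * grad_out.mean()
    W1 -= lr * X.T @ grad_h / len(X)
    b1 -= lr * grad_h.mean(axis=0)

def accuracy(mask=None):
    _, out = forward(X, mask)
    return ((out > 0.5) == y).mean()

print(f"baseline accuracy: {accuracy():.2f}")
for unit in range(8):
    mask = np.ones(8); mask[unit] = 0.0               # lesion one hidden unit
    print(f"unit {unit} ablated -> accuracy {accuracy(mask):.2f}")

In this toy setting, units whose ablation causes a large accuracy drop are the ones the learned representation relies on most, which is the kind of question the empirical studies in the book ask of much larger networks.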