
Representation Learning: Propositionalization and Embeddings

Authors: Nada Lavrač, Vid Podpečan, Marko Robnik-Šikonja
Language: English | Paperback – 11 Jul 2022
This monograph addresses advances in representation learning, a cutting-edge research area of machine learning. Representation learning refers to modern data transformation techniques that convert data of different modalities and complexity, including texts, graphs, and relations, into compact tabular representations, which effectively capture their semantic properties and relations. The monograph focuses on (i) propositionalization approaches, established in relational learning and inductive logic programming, and (ii) embedding approaches, which have gained popularity with recent advances in deep learning. The authors establish a unifying perspective on representation learning techniques developed in these various areas of modern data science, enabling the reader to understand the common underlying principles and to gain insight using selected examples and sample Python code. The monograph should be of interest to a wide audience, ranging from data scientists, machine learning researchers and students to developers, software engineers and industrial researchers interested in hands-on AI solutions.
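As an illustration of the idea sketched in the blurb, here is a minimal Python example (not taken from the book; it assumes scikit-learn is installed) that embeds a handful of short texts into a compact tabular representation, in the spirit of the embedding approaches the monograph surveys:

# Illustrative sketch only: turn raw texts into a compact table of dense vectors.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.decomposition import TruncatedSVD

documents = [
    "Propositionalization flattens relational data into a single table.",
    "Embeddings map texts and graphs into dense numeric vectors.",
    "Both yield tabular representations usable by standard learners.",
]

# Sparse TF-IDF matrix: one row per document, one column per vocabulary term.
tfidf = TfidfVectorizer().fit_transform(documents)

# Compress into a small dense "embedding" table: one 2-dimensional row per document.
embedding = TruncatedSVD(n_components=2, random_state=0).fit_transform(tfidf)

print(embedding.shape)  # (3, 2)

Each row of the resulting table is a dense vector that any standard tabular learner can consume directly.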

All formats and editions

Paperback – Springer International Publishing, 11 Jul 2022: 961.35 lei, delivery in 6-8 weeks
Hardback – Springer International Publishing, 11 Jul 2021: 968.45 lei, delivery in 6-8 weeks

Price: 961.35 lei

Old price: 1201.68 lei (-20%, new)

Express points: 1442

Estimated price in foreign currency: 183.98€ / 191.11$ / 152.82£

Printed to order

Economy delivery: 03-17 February 2025


Specifications

ISBN-13: 9783030688196
ISBN-10: 3030688194
Illustrations: XVI, 163 p., 46 illus., 38 illus. in color
Dimensions: 155 x 235 mm
Weight: 0.26 kg
Edition: 1st ed. 2021
Publisher: Springer International Publishing
Series: Springer
Place of publication: Cham, Switzerland

Contents

Introduction to Representation Learning
Machine Learning Background
Text Embeddings
Propositionalization of Relational Data
Graph and Heterogeneous Network Transformations
Unified Representation Learning Approaches
Many Faces of Representation Learning

Biographical note

Prof. Nada Lavrač (Jožef Stefan Institute, Slovenia) is a senior researcher at the Department of Knowledge Technologies at JSI (Head of Department, 2014-2020) and a full professor at the University of Nova Gorica and the Jožef Stefan International Postgraduate School (Vice-Dean, 2016-2020). Her research interests are machine learning, data mining, text mining, knowledge management and computational creativity. She chaired several conferences, including ICCC 2014, ILP 2012 and AIME 2011, co-chaired conferences including SOKD 2008-2010, ILP 2008, IDA 2007 and DS 2006, and was a keynote speaker at KI 2020, ADBIS 2019, ISWC 2017, LPNMR 2015 and JSMI 2014, among others. She is or has been a member of the editorial boards of Artificial Intelligence in Medicine, AI Communications, New Generation Computing, Applied AI, Machine Learning Journal, and Data Mining and Knowledge Discovery. She is an ECCAI/EurAI Fellow, was vice-president of ECCAI (1996-98), and served on the boards of the International Machine Learning Society and Artificial Intelligence in Medicine.

Vid Podpečan, PhD, is a research associate at the Department of Knowledge Technologies at the Jožef Stefan Institute. He obtained his BSc in computer science from the University of Ljubljana in 2007 and his PhD from the Jožef Stefan International Postgraduate School in 2013. His research interests include machine learning, computational systems biology, text mining and natural language processing, and robotics. He co-authored a scientific monograph and has published the results of his research in more than 50 scientific publications. He is also actively involved in promoting STEAM, with a focus on robotics, programming and art, for which he received an award from the Slovene Science Foundation.

Prof. Marko Robnik-Šikonja is Professor of Computer Science and Informatics at the University of Ljubljana, Faculty of Computer and Information Science. His research interests span machine learning, data mining, natural language processing, network analytics, and applications of data science techniques. His most notable scientific results are in the areas of feature evaluation, ensemble learning, explainable artificial intelligence, data generation, and natural language analytics. He is (co)author of over 150 scientific publications, cited more than 5,000 times, and of three open-source R data mining packages. He participates in several national and international projects, regularly serves as a programme committee member of top artificial intelligence and machine learning conferences, and is an editorial board member of seven international journals.


Features

- Representation learning for cutting-edge machine learning: a unifying approach to data fusion and transformation into a compact tabular format used by standard learners and modern deep neural classifiers
- Coverage of tables, relations, texts, networks and ontologies: a unified approach to handling heterogeneous data, enabling data scientists to step out of the isolated machine learning silos of their routine practice
- Open science approach with hands-on examples: methodology and code reuse, as well as replicability with demo use cases