Evolutionary Learning: Advances in Theories and Algorithms
Authors: Zhi-Hua Zhou, Yang Yu, Chao Qian. In English. Hardback, 3 June 2019
Most outcomes of evolutionary learning to date have been empirical and lack theoretical support; recently there have been considerable efforts to address this issue. This book presents a range of those efforts, divided into four parts. Part I briefly introduces readers to evolutionary learning and provides some preliminaries, while Part II presents general theoretical tools for the analysis of running time and approximation performance in evolutionary algorithms. Based on these general tools, Part III presents a number of theoretical findings on major factors in evolutionary optimization, such as recombination, representation, inaccurate fitness evaluation, and population. In closing, Part IV addresses the development of evolutionary learning algorithms with provable theoretical guarantees for several representative tasks, in which evolutionary learning offers excellent performance.
Price: 904.83 lei
Old price: 1131.04 lei
-20% New
Express Points: 1357
Estimated price in other currencies:
173.17€ • 182.69$ • 144.31£
Printed on demand
Economy delivery: 02-16 January 25
Specifications
ISBN-13: 9789811359552
ISBN-10: 9811359555
Pages: 361
Illustrations: XII, 361 p. 59 illus., 20 illus. in color.
Dimensions: 155 x 235 x 27 mm
Weight: 0.7 kg
Edition: 1st ed. 2019
Publisher: Springer Nature Singapore
Series: Springer
Place of publication: Singapore, Singapore
Contents
1. Introduction
2. Preliminaries
3. Running Time Analysis: Convergence-based Analysis
4. Running Time Analysis: Switch Analysis
5. Running Time Analysis: Comparison and Unification
6. Approximation Analysis: SEIP
7. Boundary Problems of EAs
8. Recombination
9. Representation
10. Inaccurate Fitness Evaluation
11. Population
12. Constrained Optimization
13. Selective Ensemble
14. Subset Selection
15. Subset Selection: k-Submodular Maximization
16. Subset Selection: Ratio Minimization
17. Subset Selection: Noise
18. Subset Selection: Acceleration
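Chapters 14-18 revolve around subset selection, for which the book analyzes a bi-objective (Pareto optimization) evolutionary approach. The following rough Python sketch conveys the flavor of that approach, not the book's exact algorithm: a subset is a bit string, and the population keeps all solutions that are non-dominated with respect to maximizing the objective and minimizing subset size. The objective, weights, and parameters below are illustrative assumptions of this sketch.

```python
import random

def pareto_subset_selection(f, n, k, iters=3000, seed=0):
    """Sketch of Pareto-optimization-style subset selection.

    A subset of {0, ..., n-1} is encoded as a bit string.  The
    population keeps every solution that is non-dominated w.r.t.
    the two objectives (maximize f, minimize size); at the end,
    the best solution of size at most k is returned.
    """
    rng = random.Random(seed)

    def to_set(x):
        return {i for i, b in enumerate(x) if b}

    def obj(x):  # (value, size) of a bit-string individual
        return f(to_set(x)), sum(x)

    def weakly_dominates(a, b):
        (va, sa), (vb, sb) = obj(a), obj(b)
        return va >= vb and sa <= sb

    pop = [[0] * n]  # start from the empty subset
    for _ in range(iters):
        parent = rng.choice(pop)
        # standard bit-wise mutation: flip each bit with probability 1/n
        child = [b ^ (rng.random() < 1.0 / n) for b in parent]
        if not any(weakly_dominates(q, child) for q in pop):
            pop = [q for q in pop if not weakly_dominates(child, q)]
            pop.append(child)
    feasible = [x for x in pop if sum(x) <= k]
    best = max(feasible, key=lambda x: obj(x)[0])
    return to_set(best), obj(best)[0]

# Illustrative linear objective (these weights are assumptions of the sketch).
w = [5, 4, 3, 2, 1, 1, 1, 1, 1, 1]
S, v = pareto_subset_selection(lambda s: sum(w[i] for i in s), n=10, k=3)
```

Because the population retains small, cheap subsets alongside large, valuable ones, the search can trade items in and out of a subset rather than committing greedily; this is the mechanism behind the approximation guarantees discussed in Part IV.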
Reviews
“The book is clearly and nicely written and is recommended for everyone interested in the new development in evolutionary learning.” (Andreas Wichert, zbMATH 1426.68004, 2020)
Biographical note
Zhi-Hua Zhou is a Professor, founding Director of the LAMDA Group, and Head of the Department of Computer Science and Technology at Nanjing University, China. He authored the books "Ensemble Methods: Foundations and Algorithms" (2012) and "Machine Learning" (in Chinese, 2016), and has published many papers in top artificial intelligence and machine learning venues. His h-index is 89 according to Google Scholar. He founded ACML (the Asian Conference on Machine Learning), has chaired many prestigious conferences (e.g., as AAAI 2019 Program Chair and ICDM 2016 General Chair), and has served as an action/associate editor for prestigious journals such as PAMI and the Machine Learning journal. He is a Fellow of the ACM, AAAI, AAAS, IEEE, and IAPR.
Yang Yu is an Associate Professor at Nanjing University, China. His research interests are in artificial intelligence, including reinforcement learning, machine learning, and derivative-free optimization. He was recognized in "AI's 10 to Watch" by IEEE Intelligent Systems in 2018, and has received several awards and honors, including the PAKDD Early Career Award, an IJCAI'18 Early Career Spotlight talk, the National Outstanding Doctoral Dissertation Award, the China Computer Federation Outstanding Doctoral Dissertation Award, the PAKDD'08 Best Paper Award, and the GECCO'11 Best Paper Award (Theory Track). He is a Junior Associate Editor of Frontiers of Computer Science, and has served as an Area Chair of ACML'17, IJCAI'18, and ICPR'18.
Chao Qian is an Associate Researcher at the University of Science and Technology of China. His research interests are in artificial intelligence, evolutionary computation, and machine learning. He has published over 20 papers in leading international journals and conference proceedings, including Artificial Intelligence, Evolutionary Computation, IEEE Transactions on Evolutionary Computation, Algorithmica, NIPS, IJCAI, and AAAI. He won the ACM GECCO 2011 Best Paper Award (Theory Track) and the IDEAL 2016 Best Paper Award, and has chaired the IEEE Computational Intelligence Society (CIS) Task Force "Theoretical Foundations of Bio-inspired Computation".
From the back cover
Many machine learning tasks involve solving complex optimization problems, such as working on non-differentiable, non-continuous, and non-unique objective functions; in some cases it can prove difficult to even define an explicit objective function. Evolutionary learning applies evolutionary algorithms to address optimization problems in machine learning, and has yielded encouraging outcomes in many applications. However, due to the heuristic nature of evolutionary optimization, most outcomes to date have been empirical and lack theoretical support. This shortcoming has kept evolutionary learning from being well received in the machine learning community, which favors solid theoretical approaches.
Recently there have been considerable efforts to address this issue. This book presents a range of those efforts, divided into four parts. Part I briefly introduces readers to evolutionary learning and provides some preliminaries, while Part II presents general theoretical tools for the analysis of running time and approximation performance in evolutionary algorithms. Based on these general tools, Part III presents a number of theoretical findings on major factors in evolutionary optimization, such as recombination, representation, inaccurate fitness evaluation, and population. In closing, Part IV addresses the development of evolutionary learning algorithms with provable theoretical guarantees for several representative tasks, in which evolutionary learning offers excellent performance.
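To make the setting concrete: evolutionary algorithms treat the objective as a black box, so they apply even when the objective is non-differentiable or non-continuous. Below is a minimal, illustrative sketch of a (1+1) evolutionary algorithm on bit strings, the basic scheme whose running time is analyzed in books like this one; the function names and parameters are our own assumptions, not taken from the book.

```python
import random

def one_plus_one_ea(fitness, n, max_evals=10000, seed=0):
    """Minimal (1+1) evolutionary algorithm on bit strings.

    Keeps a single parent; flips each bit independently with
    probability 1/n and accepts the offspring if it is no worse.
    Requires only fitness values, never gradients.
    """
    rng = random.Random(seed)
    parent = [rng.randint(0, 1) for _ in range(n)]
    best = fitness(parent)
    for _ in range(max_evals):
        child = [b ^ (rng.random() < 1.0 / n) for b in parent]
        f = fitness(child)
        if f >= best:  # accept ties so the search can drift on plateaus
            parent, best = child, f
    return parent, best

# A non-differentiable, non-continuous objective: the number of
# leading ones in the bit string (the classic LeadingOnes benchmark
# used in running-time analysis).
def leading_ones(x):
    count = 0
    for b in x:
        if b != 1:
            break
        count += 1
    return count

sol, val = one_plus_one_ea(leading_ones, n=20)
```

Running-time analysis of the kind developed in Part II asks, for example, how many fitness evaluations such an algorithm needs in expectation before `val` reaches its maximum.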
Features
Presents theoretical results for evolutionary learning
Provides general theoretical tools for analysing evolutionary algorithms
Proposes evolutionary learning algorithms with provable theoretical guarantees