Scaling up Machine Learning: Parallel and Distributed Approaches
Edited by Ron Bekkerman, Mikhail Bilenko, John Langford · English · Hardback – 29 Dec 2011
All formats and editions

| Format | Edition | Price | Express delivery |
|---|---|---|---|
| Paperback (1) | Cambridge University Press – 28 Mar 2018 | 302.46 lei (3-5 weeks) | +32.00 lei (7-13 days) |
| Hardback (1) | Cambridge University Press – 29 Dec 2011 | 648.30 lei (6-8 weeks) | |
Price: 648.30 lei
Previous price: 810.38 lei
-20% New
Express points: 972
Estimated price in other currencies:
124.07€ • 130.49$ • 103.35£
Printed on demand
Economy delivery: 04-18 January 25
Orders by phone: 021 569.72.76
Specifications
ISBN-13: 9780521192248
ISBN-10: 0521192242
Pages: 492
Illustrations: 144 b/w illus.
Dimensions: 185 x 259 x 33 mm
Weight: 1.09 kg
Publisher: Cambridge University Press
Series: Cambridge University Press
Place of publication: New York, United States
Contents
1. Scaling up machine learning: introduction Ron Bekkerman, Mikhail Bilenko and John Langford
Part I. Frameworks for Scaling Up Machine Learning:
2. MapReduce and its application to massively parallel learning of decision tree ensembles Biswanath Panda, Joshua S. Herbach, Sugato Basu and Roberto J. Bayardo
3. Large-scale machine learning using DryadLINQ Mihai Budiu, Dennis Fetterly, Michael Isard, Frank McSherry and Yuan Yu
4. IBM parallel machine learning toolbox Edwin Pednault, Elad Yom-Tov and Amol Ghoting
5. Uniformly fine-grained data parallel computing for machine learning algorithms Meichun Hsu, Ren Wu and Bin Zhang
Part II. Supervised and Unsupervised Learning Algorithms:
6. PSVM: parallel support vector machines with incomplete Cholesky factorization Edward Chang, Hongjie Bai, Kaihua Zhu, Hao Wang, Jian Li and Zhihuan Qiu
7. Massive SVM parallelization using hardware accelerators Igor Durdanovic, Eric Cosatto, Hans Peter Graf, Srihari Cadambi, Venkata Jakkula, Srimat Chakradhar and Abhinandan Majumdar
8. Large-scale learning to rank using boosted decision trees Krysta M. Svore and Christopher J. C. Burges
9. The transform regression algorithm Ramesh Natarajan and Edwin Pednault
10. Parallel belief propagation in factor graphs Joseph Gonzalez, Yucheng Low and Carlos Guestrin
11. Distributed Gibbs sampling for latent variable models Arthur Asuncion, Padhraic Smyth, Max Welling, David Newman, Ian Porteous and Scott Triglia
12. Large-scale spectral clustering with MapReduce and MPI Wen-Yen Chen, Yangqiu Song, Hongjie Bai, Chih-Jen Lin and Edward Y. Chang
13. Parallelizing information-theoretic clustering methods Ron Bekkerman and Martin Scholz
Part III. Alternative Learning Settings:
14. Parallel online learning Daniel Hsu, Nikos Karampatziakis, John Langford and Alex J. Smola
15. Parallel graph-based semi-supervised learning Jeff Bilmes and Amarnag Subramanya
16. Distributed transfer learning via cooperative matrix factorization Evan Xiang, Nathan Liu and Qiang Yang
17. Parallel large-scale feature selection Jeremy Kubica, Sameer Singh and Daria Sorokina
Part IV. Applications:
18. Large-scale learning for vision with GPUs Adam Coates, Rajat Raina and Andrew Y. Ng
19. Large-scale FPGA-based convolutional networks Clement Farabet, Yann LeCun, Koray Kavukcuoglu, Berin Martini, Polina Akselrod, Selcuk Talay and Eugenio Culurciello
20. Mining tree structured data on multicore systems Shirish Tatikonda and Srinivasan Parthasarathy
21. Scalable parallelization of automatic speech recognition Jike Chong, Ekaterina Gonina, Kisun You and Kurt Keutzer.
Reviews
'One of the landmark achievements of our time is the ability to extract value from large volumes of data. Engineering and algorithmic developments on this front have gelled substantially in recent years, and are quickly being reduced to practice in widely available, reusable forms. This book provides a broad and timely snapshot of the state of developments in scalable machine learning, which should be of interest to anyone who wishes to understand and extend the state of the art in analyzing data.' Joseph M. Hellerstein, University of California, Berkeley
'This is a book that every machine learning practitioner should keep in their library.' Yoram Singer, Google Inc.
'The contributions in this book run the gamut from frameworks for large-scale learning to parallel algorithms to applications, and contributors include many of the top people in this burgeoning subfield. Overall this book is an invaluable resource for anyone interested in the problem of learning from and working with big datasets.' William W. Cohen, Carnegie Mellon University, Pennsylvania
'This unique, timely book provides a 360-degree view and understanding of both conceptual and practical issues that arise when implementing leading machine learning algorithms on a wide range of parallel and high-performance computing platforms. It will serve as an indispensable handbook for the practitioner of large-scale data analytics and a guide to dealing with BIG data and making sound choices for efficiently applying learning algorithms to them. It can also serve as the basis for an attractive graduate course on parallel/distributed machine learning and data mining.' Joydeep Ghosh, University of Texas
Description
This integrated collection covers a range of parallelization platforms, concurrent programming frameworks, and machine learning settings, illustrated with case studies.