Statistical Foundations of Data Science: Chapman & Hall/CRC Data Science Series
Authors: Jianqing Fan, Runze Li, Cun-Hui Zhang, Hui Zou. Language: English. Hardback – 17 Aug 2020
The book begins with an introduction to the stylized features of big data and their impacts on statistical analysis. It then introduces multiple linear regression and expands the techniques of model building via nonparametric regression and kernel tricks. It provides a comprehensive account of sparsity exploration and model selection for multiple regression, generalized linear models, quantile regression, robust regression, and hazards regression, among others. High-dimensional inference is thoroughly addressed, as is feature screening. The book also gives a comprehensive account of high-dimensional covariance estimation, learning latent factors and hidden structures, and their applications to statistical estimation, inference, prediction, and machine learning problems. Finally, it thoroughly introduces statistical machine learning theory and methods for classification, clustering, and prediction, including CART, random forests, boosting, support vector machines, clustering algorithms, sparse PCA, and deep learning.
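To make the sparsity and model-selection theme concrete, here is a minimal illustrative sketch (not taken from the book) of penalized least squares with an L1 (lasso) penalty on synthetic data. The library choice (scikit-learn), the simulated data, and the penalty level are assumptions made purely for the example.

```python
# Minimal sketch of L1-penalized least squares (lasso) on synthetic data.
# Illustrative only; not code from the book. Uses scikit-learn.
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(0)
n, p = 100, 200                            # more predictors than observations
X = rng.standard_normal((n, p))
beta = np.zeros(p)
beta[:5] = [3.0, -2.0, 1.5, -1.0, 0.5]     # only 5 truly active coefficients
y = X @ beta + rng.standard_normal(n)

model = Lasso(alpha=0.1).fit(X, y)         # alpha plays the role of the penalty level
selected = np.flatnonzero(model.coef_)     # indices with nonzero estimated coefficients
print("selected predictors:", selected)
```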
Price: 771.25 lei
Old price: 847.52 lei (9% off)
Express points: 1157
Estimated price in other currencies:
147.60€ • 153.32$ • 122.60£
Book in stock
Economy delivery: 13-27 January 25
Express delivery: 27 December 24 - 02 January 25 for 58.83 lei
Specifications
ISBN-13: 9781466510845
ISBN-10: 1466510846
Pages: 774
Illustrations: 100 illustrations, black and white
Dimensions: 156 x 234 x 48 mm
Weight: 1.26 kg
Edition: 1
Publisher: CRC Press
Imprint: Chapman and Hall/CRC
Series: Chapman & Hall/CRC Data Science Series
Contents
1. Introduction. 2. Multiple and Nonparametric Regression. 3. Introduction to Penalized Least-Squares. 4. Penalized Least Squares: Properties. 5. Generalized Linear Models and Penalized Likelihood. 6. Penalized M-estimators. 7. High Dimensional Inference. 8. Feature Screening. 9. Covariance Regularization and Graphical Models. 10. Covariance Learning and Factor Models. 11. Applications of Factor Models and PCA. 12. Supervised Learning. 13. Unsupervised Learning. 14. An Introduction to Deep Learning.
Biographical Note
The authors are international authorities and leaders in the topics presented. All are fellows of the Institute of Mathematical Statistics and the American Statistical Association.
Jianqing Fan is the Frederick L. Moore Professor at Princeton University. He is a co-editor of the Journal of Business & Economic Statistics and a former co-editor of The Annals of Statistics, Probability Theory and Related Fields, and the Journal of Econometrics. His honors include the 2000 COPSS Presidents' Award, the Guy Medal in Silver, the Noether Senior Scholar Award, a Guggenheim Fellowship, AAAS fellowship, and election as an Academician of Academia Sinica.
Runze Li is the Eberly Family Chair Professor at Pennsylvania State University, a fellow of the AAAS, and a former co-editor of The Annals of Statistics.
Cun-Hui Zhang is a Distinguished Professor at Rutgers University and a former co-editor of Statistical Science.
Hui Zou is a professor at the University of Minnesota and a former action editor of the Journal of Machine Learning Research.
Reviews
"This book delivers a very comprehensive summary of the development of statistical foundations of data science. The authors no doubt are doing frontier research and have made several crucial contributions to the field. Therefore, the book offers a very good account of the most cutting-edge development. The book is suitable for both master and Ph.D. students in statistics, and also for researchers in both applied and theoretical data science. Researchers can take this book as an index of topics, as it summarizes in brief many significant research articles in an accessible way. Each chapter can be read independently by experienced researchers. It provides a nice cover of key concepts in those topics and researchers can benefit from reading the specific chapters and paragraphs to get a big picture rather than diving into many technical articles. There are altogether 14 chapters. It can serve as a textbook for two semesters. The book also provides handy codes and data sets, which is a great treasure for practitioners."
~Journal of Time Series Analysis
"This text—collaboratively authored by renowned statisticians Fan (Princeton Univ.), Li (Pennsylvania State Univ.), Zhang (Rutgers Univ.), and Zhou (Univ. of Minnesota)—laboriously compiles and explains theoretical and methodological achievements in data science and big data analytics. Amid today's flood of coding-based cookbooks for data science, this book is a rare monograph addressing recent advances in mathematical and statistical principles and the methods behind regularized regression, analysis of high-dimensional data, and machine learning. The pinnacle achievement of the book is its comprehensive exploration of sparsity for model selection in statistical regression, considering models such as generalized linear regression, penalized least squares, quantile and robust regression, and survival regression. The authors discuss sparsity not only in terms of various types of penalties but also as an important feature of numerical optimization algorithms, now used in manifold applications including deep learning. The text extensively probes contemporary high-dimensional data modeling methods such as feature screening, covariate regularization, graphical modeling, and principal component and factor analysis. The authors conclude by introducing contemporary statistical machine learning, spanning a range of topics in supervised and unsupervised learning techniques and deep learning. This book is a must-have bookshelf item for those with a thirst for learning about the theoretical rigor of data science."
~Choice Review, S-T. Kim, North Carolina A&T State University, August 2021
Description
Gives a comprehensive and systematic account of high-dimensional data analysis, including variable selection via regularization methods and sure independence feature screening methods. It is a valuable reference for researchers involved with model selection, variable selection, machine learning, and risk management.
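The sure independence screening idea mentioned in the description can be summarized in a few lines: rank predictors by absolute marginal correlation with the response and keep only the top d before fitting a refined model such as a penalized regression. The sketch below is purely illustrative and is not the authors' code; the function name, the simulated data, and the choice of d are assumptions.

```python
# Minimal sketch of sure independence screening (SIS) by marginal correlation.
# Illustrative only; not code from the book.
import numpy as np

def sis(X, y, d):
    """Return the indices of the d predictors most correlated with y."""
    Xc = X - X.mean(axis=0)
    yc = y - y.mean()
    corr = np.abs(Xc.T @ yc) / (np.linalg.norm(Xc, axis=0) * np.linalg.norm(yc))
    return np.argsort(corr)[::-1][:d]

# Example: screen 1000 predictors down to 20 before a penalized fit.
rng = np.random.default_rng(1)
X = rng.standard_normal((200, 1000))
y = X[:, 0] - 2 * X[:, 1] + rng.standard_normal(200)
print(sis(X, y, 20))
```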