Introduction to Statistical Modelling and Inference
Author: Murray Aitkin · English · Hardback – 30 September 2022
Statistical modelling and inference are concerned with data and with the computational methods for analysing them. There are two different kinds of methods for this. The model-based approach uses probability models and likelihood and Bayesian theory, while the model-free approach does not require a probability model, likelihood or Bayesian theory. These two approaches are based on different philosophical principles of probability theory, espoused by the famous statisticians Ronald Fisher and Jerzy Neyman.
Introduction to Statistical Modelling and Inference covers simple experimental and survey designs,
and probability models up to and including generalised linear (regression) models and some
extensions of these, including finite mixtures. A wide range of examples from different application
fields are also discussed and analysed. No special software is used, beyond that needed for maximum
likelihood analysis of generalised linear models. Students are expected to have a basic
mathematical background in algebra, coordinate geometry and calculus.
Features
• Probability models are developed from the shape of the sample empirical cumulative distribution
function (cdf) or a transformation of it.
• Bounds for the value of the population cumulative distribution function are obtained from the Beta distribution at each point of the empirical cdf (a code sketch of these pointwise bounds follows this list).
• Bayes’s theorem is developed from the properties of the screening test for a rare condition (see the worked sketch after this list).
• The multinomial distribution provides an always-true model for any randomly sampled data.
• The model-free bootstrap method for finding the precision of a sample estimate has a model-based parallel – the Bayesian bootstrap – based on the always-true multinomial distribution (the two are compared in a sketch after this list).
• The Bayesian posterior distributions of model parameters can be obtained from the maximum likelihood analysis of the model (a Poisson sketch after this list illustrates the idea).
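The Beta-distribution bounds follow from a standard result: if x_(i) is the i-th order statistic of a random sample of size n from a continuous distribution with cdf F, then F(x_(i)) has a Beta(i, n - i + 1) distribution. Below is a minimal sketch of how such pointwise bounds could be computed, assuming Python with NumPy and SciPy (the book itself uses no special software); it illustrates the idea rather than reproducing the book's treatment.

```python
import numpy as np
from scipy.stats import beta

def ecdf_with_beta_bounds(sample, level=0.95):
    """Empirical cdf with pointwise Beta-based bounds for F at each order statistic.

    Uses the fact that F(x_(i)) ~ Beta(i, n - i + 1) for a random sample of
    size n from a continuous distribution with cdf F.
    """
    x = np.sort(np.asarray(sample))
    n = len(x)
    i = np.arange(1, n + 1)
    ecdf = i / n                                   # empirical cdf at each order statistic
    lo = beta.ppf((1 - level) / 2, i, n - i + 1)   # lower pointwise bound for F(x_(i))
    hi = beta.ppf((1 + level) / 2, i, n - i + 1)   # upper pointwise bound for F(x_(i))
    return x, ecdf, lo, hi

# Illustrative exponential lifetimes (hypothetical data, not from the book)
rng = np.random.default_rng(1)
x, ecdf, lo, hi = ecdf_with_beta_bounds(rng.exponential(scale=2.0, size=50))
```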
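The screening-test development of Bayes's theorem amounts to computing the probability of a rare condition given a positive test result from the prevalence, sensitivity and specificity. A minimal sketch, with illustrative numbers that are not taken from the book:

```python
def positive_predictive_value(prevalence, sensitivity, specificity):
    """P(condition | positive test) by Bayes's theorem."""
    p_pos_given_cond = sensitivity
    p_pos_given_no_cond = 1 - specificity
    p_pos = p_pos_given_cond * prevalence + p_pos_given_no_cond * (1 - prevalence)
    return p_pos_given_cond * prevalence / p_pos

# For a rare condition, even a fairly accurate test gives a low posterior probability.
print(positive_predictive_value(prevalence=0.001, sensitivity=0.99, specificity=0.95))
# ~0.019: only about 2% of positive results correspond to the condition.
```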
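The bootstrap parallel can be seen side by side: the model-free (frequentist) bootstrap resamples the data with replacement, while the Bayesian bootstrap draws Dirichlet(1, ..., 1) weights over the observed values, the posterior weighting implied by the always-true multinomial model. A sketch for the precision of a sample mean, again assuming NumPy and not drawn from the book:

```python
import numpy as np

rng = np.random.default_rng(0)
y = rng.exponential(scale=2.0, size=40)   # illustrative sample
B = 5000                                  # number of bootstrap draws

# Frequentist (model-free) bootstrap: resample the data with replacement.
freq_means = np.array([rng.choice(y, size=len(y), replace=True).mean()
                       for _ in range(B)])

# Bayesian bootstrap: Dirichlet(1, ..., 1) weights on the observed values.
weights = rng.dirichlet(np.ones(len(y)), size=B)   # B x n weight matrix
bayes_means = weights @ y                          # one weighted mean per draw

print("frequentist bootstrap SD:", freq_means.std(ddof=1))
print("Bayesian bootstrap SD:   ", bayes_means.std(ddof=1))
```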
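For the last feature, one standard large-sample route from a maximum likelihood analysis to a posterior is to treat the MLE and the inverse observed information as the mean and variance of an approximate normal posterior. The Poisson sketch below compares this ML-based approximation with the exact flat-prior posterior; it conveys the idea and is not the book's own construction.

```python
import numpy as np
from scipy.stats import gamma, norm

rng = np.random.default_rng(2)
y = rng.poisson(lam=3.0, size=30)       # illustrative Poisson counts
n, ybar = len(y), y.mean()

# Maximum likelihood analysis: MLE and observed information for the Poisson mean.
mu_hat = ybar                           # MLE of the mean
obs_info = n / ybar                     # minus second derivative of the log-likelihood at the MLE

# Approximate posterior obtained from the ML quantities.
approx_post = norm(loc=mu_hat, scale=np.sqrt(1 / obs_info))

# Exact posterior under a flat prior, for comparison: Gamma(sum(y) + 1, rate = n).
exact_post = gamma(a=y.sum() + 1, scale=1 / n)

print("approx 95% interval:", approx_post.ppf([0.025, 0.975]))
print("exact  95% interval:", exact_post.ppf([0.025, 0.975]))
```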
This book is aimed at students in a wide range of disciplines including Data Science. The book is
based on the model-based theory, used widely by scientists in many fields, and compares it, in less
detail, with the model-free theory, popular in computer science, machine learning and official
survey analysis. The development of the model-based theory is accelerated by recent advances in Bayesian analysis.
Price: 674.31 lei
Old price: 793.31 lei
-15% New
Express points: 1011
Estimated price in foreign currency:
129.07€ • 135.34$ • 106.64£
Printed to order
Economy delivery 29 January – 12 February 25
Order line: 021 569.72.76
Specifications
ISBN-13: 9781032105710
ISBN-10: 1032105712
Pages: 390
Illustrations: 66 tables, black and white; 72 line drawings, colour; 150 line drawings, black and white; 72 illustrations, colour; 150 illustrations, black and white
Dimensions: 178 x 254 x 26 mm
Weight: 0.96 kg
Edition: 1
Publisher: CRC Press
Series: Chapman and Hall/CRC
Biographical note
Murray Aitkin earned his BSc, PhD and DSc in Mathematical Statistics from Sydney University. He completed his post-doctoral work at the Psychometric Laboratory, University of North Carolina, Chapel Hill. He has held teaching and lecturing positions at Virginia Polytechnic Institute, the University of New South Wales and Macquarie University, along with research professor positions at Lancaster University (3 years, UK Social Science Research Council) and the University of Western Australia (5 years, Australian Research Council). He has been a Professor of Statistics at Lancaster University, Tel Aviv University and the University of Newcastle, UK.
He has been a visiting researcher and has held consulting positions at the Educational Testing Service (Fulbright Senior Fellow 1971–72 and Senior Statistician 1988–89). He was Chief Statistician (2000–2002) at the Education Statistics Services Institute, American Institutes for Research, Washington DC, and advisor to the National Center for Education Statistics, US Department of Education.
He is a Fellow of the American Statistical Association, an Elected Member of the International Statistical Institute, and an Honorary Member of the Statistical Modelling Society.
He is an Honorary Professorial Associate at the University of Melbourne: Department of Psychology 2004–2008, and Department (now School) of Mathematics and Statistics 2008 to the present.
Contents
Preface.
1.1. What is Statistical Modelling? 1.2. What is Statistical Analysis? 1.3. What is Statistical Inference? 1.4. Why this book? 1.5. Why the focus on the Bayesian approach? 1.6. Coverage of this book. 1.7. Recent changes in technology. 1.8. Aims of the course.
2. What is (or are) Big Data?
3. Data and research studies. 3.1. Lifetimes of radio transceivers. 3.2. Clustering of V1 missile hits in South London. 3.3. Court case on vaccination risk. 3.4. Clinical trial of Depepsen for the treatment of duodenal ulcers. 3.5. Effectiveness of treatments for respiratory distress in newborn babies. 3.6. Vitamin K. 3.7. Species counts. 3.8. Toxicology in small animal experiments. 3.9. Incidence of Down’s syndrome in four regions. 3.10. Fish species in lakes. 3.11. Absence from school. 3.12. Hostility in husbands of suicide attempters. 3.13. Tolerance of racial intermarriage. 3.14. Hospital bed use. 3.15. Dugong growth. 3.16. Simulated motorcycle collision. 3.17. Global warming. 3.18. Social group membership.
4. The StatLab data base. 4.1. Types of variables. 4.2. StatLab population questions.
5. Sample surveys – should we believe what we read? 5.1. Women and Love. 5.2. Would you have children? 5.3. Representative sampling. 5.4. Bias in the Newsday sample. 5.5. Bias in the Women and Love sample.
6. Probability. 6.1. Relative frequency. 6.2. Degree of belief. 6.3. StatLab dice sampling. 6.4. Computer sampling. 6.5. Probability for sampling. 6.6. Probability axioms. 6.7. Screening tests and Bayes’s theorem. 6.8. The misuse of probability in the Sally Clark case. 6.9. Random variables and their probability distributions. 6.10. Sums of independent random variables.
7. Statistical inference I – discrete distributions. 7.1. Evidence-based policy. 7.2. The basis of statistical inference. 7.3. The survey sampling approach. 7.4. Model-based inference theories. 7.5. The likelihood function. 7.6. Binomial distribution. 7.7. Frequentist theory. 7.8. Bayesian theory. 7.9. Inferences from posterior sampling. 7.10. Sample design. 7.11. Parameter transformations. 7.12. The Poisson distribution. 7.13. Categorical variables. 7.14. Maximum likelihood. 7.15. Bayesian analysis.
8. Comparison of binomials: the Randomised Clinical Trial. 8.1. Definition. 8.2. Example – RCT of Depepsen for the treatment of duodenal ulcers. 8.3. Monte Carlo simulation. 8.4. RCT continued. 8.5. Bayesian hypothesis testing/model comparison. 8.6. Other measures of treatment difference. 8.7. The ECMO trials.
9. Data visualisation. 9.1. The histogram. 9.2. The empirical mass and cumulative distribution functions. 9.3. Probability models for continuous variables.
10. Statistical Inference II – the continuous exponential, Gaussian and uniform distributions. 10.1. The exponential distribution. 10.2. The exponential likelihood. 10.3. Frequentist theory. 10.4. Bayesian theory. 10.5. The Gaussian distribution. 10.6. The Gaussian likelihood function. 10.7. Frequentist inference. 10.8. Bayesian inference. 10.9. Hypothesis testing. 10.10. Frequentist hypothesis testing. 10.11. Bayesian hypothesis testing. 10.12. Pivotal functions. 10.13. Conjugate priors. 10.14. The uniform distribution.
11. Statistical Inference III – two-parameter continuous distributions. 11.1. The Gaussian distribution. 11.2. Frequentist analysis. 11.3. Bayesian analysis. 11.4. The lognormal distribution. 11.5. The Weibull distribution. 11.6. The gamma distribution. 11.7. The gamma likelihood.
12. Model assessment. 12.1. Gaussian model assessment. 12.2. Lognormal model assessment. 12.3. Exponential model assessment. 12.4. Weibull model assessment. 12.5. Gamma model assessment.
13. The multinomial distribution. 13.1. The multinomial likelihood. 13.2. Frequentist analysis. 13.3. Bayesian analysis. 13.4. Criticisms of the Haldane prior. 13.5. Inference for multinomial quantiles. 13.6. Dirichlet posterior weighting. 13.7. The frequentist bootstrap. 13.8. Stratified sampling and weighting.
14. Model comparison and model averaging. 14.4. The deviance. 14.5. Asymptotic distribution of the deviance. 14.6. Nested models. 14.7. Model choice and model averaging.
15. Gaussian linear regression models. 15.1. Simple linear regression. 15.2. Model assessment through residual examination. 15.3. Likelihood for the simple linear regression model. 15.4. Maximum likelihood. 15.5. Bayesian and frequentist inferences. 15.6. Model-robust analysis. 15.7. Correlation and prediction. 15.8. Probability model assessment. 15.9. "Dummy variable" regression. 15.10. Two-variable models. 15.11. Model assumptions. 15.12. The p-variable linear model. 15.13. The Gaussian multiple regression likelihood. 15.14. Interactions. 15.15. Ridge regression, the Lasso and the "elastic net". 15.16. Modelling boy birthweights. 15.17. Modelling girl intelligence at age 10 and family income. 15.18. Modelling of the hostility data. 15.19. Principal component regression.
16. Incomplete data and their analysis with the EM and DA algorithms. 16.1. The general incomplete data model. 16.2. The EM algorithm. 16.3. Missingness. 16.4. Lost data. 16.5. Censoring in the exponential distribution. 16.6. Randomly missing Gaussian observations. 16.7. Missing responses and/or covariates in simple and multiple regression. 16.8. Mixture distributions. 16.9. Bayesian analysis and the Data Augmentation algorithm.
17. Generalised linear models (GLMs). 17.1. The exponential family. 17.2. Maximum likelihood. 17.3. The GLM algorithm. 17.4. Bayesian package development. 17.5. Bayesian analysis from ML. 17.6. Binary response models. 17.7. The menarche data. 17.8. Poisson regression – fish species frequency. 17.9. Gamma regression.
18. Extensions of GLMs. 18.1. Double GLMs. 18.2. Maximum likelihood. 18.3. Bayesian analysis. 18.4. Segmented or broken-stick regressions. 18.5. Heterogeneous regressions. 18.6. Highly non-linear functions. 18.7. Neural networks. 18.8. Social networks and social group membership. 18.9. The motorcycle data.
19. Appendix 1 – length-biased sampling.
20. Appendix 2 – Two-component Gaussian mixture.
21. Appendix 3 – StatLab Variables.
22. Appendix 4 – a short history of statistics from 1890.
Description
The book is based on the model-based theory, used widely by scientists in many fields. It covers simple experimental and survey designs, and probability models up to and including generalised linear (regression) models and some extensions of these, including finite mixtures.