
Distribution-Free Statistical Methods, Second Edition: Chapman & Hall/CRC Monographs on Statistics and Applied Probability

Author: J.S. Maritz
Language: English | Hardback, April 1995
Distribution-free statistical methods enable users to make statistical inferences with minimum assumptions about the population in question. They are widely used, especially in the areas of medical and psychological research.
This new edition is aimed at the senior undergraduate and graduate level. It also includes a discussion of new techniques that have arisen from improvements in statistical computing. Interest in estimation techniques has grown in particular, and this part of the book has been expanded accordingly. Finally, Distribution-Free Statistical Methods includes more examples with actual data sets appearing in the text.
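As a quick illustration of the kind of distribution-free inference the book covers (not an example from the book itself; the data and the use of SciPy's wilcoxon routine are assumptions made purely for illustration), a Wilcoxon signed-rank test of a location shift can be run on paired differences without assuming the population is normal:

# Minimal sketch: a distribution-free (Wilcoxon signed-rank) test of location.
# The data are hypothetical paired differences (treatment minus control);
# no normality assumption is made about the underlying population.
from scipy.stats import wilcoxon

differences = [1.8, -0.4, 2.1, 0.9, 1.3, -0.2, 1.7, 0.6]

# H0: the differences are symmetrically distributed about zero (no location shift)
result = wilcoxon(differences)
print(f"W = {result.statistic}, p = {result.pvalue:.3f}")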

From the series Chapman & Hall/CRC Monographs on Statistics and Applied Probability

Price: 869.39 lei

Old price: 1177.44 lei
-26% New

Express points: 1304

Estimated price in other currencies:
166.37€ | 174.98$ | 139.01£

Printed on demand

Economy delivery: 08-22 January 25


Specifications

ISBN-13: 9780412552601
ISBN-10: 0412552604
Pages: 268
Dimensions: 138 x 216 x 17 mm
Weight: 0.39 kg
Edition: 2nd, revised
Publisher: CRC Press
Collection: Chapman and Hall/CRC
Series: Chapman & Hall/CRC Monographs on Statistics and Applied Probability


Target audience

Professional

Contents

1 Basic concepts in distribution-free methods: 1.1 Introduction -- 1.2 Randomization and exact tests -- 1.3 Test statistics and estimating equations -- 1.4 Consistency in the one parameter case -- 1.5 Confidence limits -- 1.6 Efficiency considerations in the one parameter case -- 1.6.1 Estimation -- 1.6.2 Hypothesis testing -- 1.7 Estimation of standard errors -- 1.8 Multiple samples and parameters -- 1.8.1 Introduction -- 1.8.2 Point estimation -- 1.8.3 Hypothesis testing -- 1.8.4 Confidence regions -- 1.9 Normal approximations -- 1.9.1 The need for normal and related approximations -- 1.9.2 The central limit theorem -- 1.9.3 Sampling from finite populations -- 1.9.4 Linear rank statistics
2 One-sample location problems: 2.1 Introduction -- 2.1.1 The mean -- 2.1.2 The median -- 2.1.3 Other measures of location -- 2.2 The median -- 2.2.1 The sign statistic -- 2.2.2 The null distribution of the sign statistic -- 2.2.3 Hypothesis testing -- 2.2.4 Confidence limits for θ -- 2.2.5 Point estimation of θ -- 2.2.6 Estimating the standard error of the sample median -- 2.2.7 Efficiency considerations -- 2.2.8 Computational notes -- 2.3 Symmetric distributions -- 2.4 The mean statistic -- 2.4.1 Hypothesis testing -- 2.4.2 Confidence limits -- 2.4.3 Normal approximations -- 2.4.4 Point estimation and efficiency considerations -- 2.4.5 Computational note -- 2.5 The Wilcoxon signed rank statistic -- 2.5.1 The null distribution of W -- 2.5.2 Hypothesis testing -- 2.5.3 Confidence limits -- 2.5.4 Point estimation based on W -- 2.5.5 Efficiency -- 2.5.6 Estimating the variance of the Hodges-Lehmann estimate -- 2.5.7 Computational notes -- 2.6 Other rank based transformations -- 2.6.1 Scores based directly on ranks -- 2.6.2 The null distribution of w -- 2.6.3 Hypothesis testing -- 2.6.4 Confidence limits -- 2.6.5 Point estimation -- 2.6.6 Efficiency -- 2.6.7 Optimum rank statistics -- 2.7 Robust transformations -- 2.8 M-estimates -- 2.8.1 Hypothesis testing and confidence limits -- 2.8.2 Point estimation and efficiency -- 2.9 M-estimation and scaling -- 2.9.1 Hypothesis testing -- 2.9.2 Point estimation and confidence limits -- 2.9.3 Estimating the variance of an M-estimate -- 2.10 L-estimates -- 2.11 Ties -- 2.12 Asymmetric distributions: M-estimates
3 Miscellaneous one-sample problems: 3.1 Introduction -- 3.2 Dispersion: the interquartile range -- 3.2.1 Symmetric F, known location -- 3.2.2 General F -- 3.3 The sample distribution function -- 3.3.1 One-sided confidence bands for F -- 3.4 Estimation of densities -- 3.4.1 Estimation of F when some observations are censored -- 3.4.2 The actuarial method of estimating F -- 3.4.3 The product-limit estimate of F -- 3.5 Paired comparisons -- 3.5.1 Signed rank tests -- 3.5.2 Sign tests
4 Two-sample problems: 4.1 Types of two-sample problems -- 4.2 The basic randomization argument -- 4.3 Inference about location difference -- 4.3.1 Introduction -- 4.3.2 The two-sample mean statistic -- 4.3.3 The two-sample sign statistic -- 4.3.4 The two-sample rank sum statistic -- 4.3.5 Two-sample transformed rank statistics -- 4.3.6 Robust transformations in the two-sample case -- 4.4 Multiplicative models -- 4.5 Proportional hazards (Lehmann alternative) -- 4.5.1 The Wilcoxon statistic and inference about a -- 4.5.2 The 'log-rank' test and inference about a -- 4.5.3 Conditional likelihood and the log-rank test -- 4.5.4 The log-rank test and censored observations -- 4.6 Dispersion alternatives -- 4.6.1 A randomized exact test of dispersion -- 4.6.2 Comparing interquartile ranges -- 4.6.3 Rank test for dispersion
5 Straight line regression: 5.1 The model and some preliminaries -- 5.2 Inference about β only -- 5.2.1 Inference based on untransformed residuals -- 5.2.2 Rank transformation of residuals -- 5.2.3 Sign transformation -- 5.2.4 Optimal weights for statistics of type T -- 5.2.5 Theil's statistic, Kendall's rank correlation -- 5.2.6 Robust transformations -- 5.2.7 Computational notes -- 5.3 Joint inference about α and β -- 5.3.1 Median regression -- 5.3.2 Symmetric untransformed residuals -- 5.3.3 Symmetric residuals: signed rank method -- 5.3.4 Symmetric residuals: scores based on ranks -- 5.3.5 Symmetric residuals: robust transformations
6 Multiple regression and general linear models: 6.1 Introduction -- 6.2 Plane regression: two independent variables -- 6.2.1 Inference about slopes: joint conditional distributions -- 6.3 Rank statistics for slopes -- 6.4 Sign statistics for slopes -- 6.4.1 Joint confidence regions -- 6.4.2 Point estimation -- 6.4.3 Consistency and efficiency -- 6.4.4 Estimating the covariance matrix -- 6.5 Inference about intercepts and slopes -- 6.5.1 Sign statistics -- 6.5.2 Symmetric residuals -- 6.5.3 Signed ranks -- 6.5.4 Robust transforms -- 6.6 General linear models -- 6.7 Inference about slopes only -- 6.7.1 Mean statistics -- 6.7.2 One-way analysis of variance -- 6.7.3 Randomized blocks: two-way analysis of variance -- 6.8 Exact inference using restricted randomization -- 6.8.1 Inference about individual regression coefficients -- 6.8.2 Rank transformations -- 6.8.3 Grouping and restricted randomization
7 Bivariate problems: 7.1 Introduction -- 7.2 Tests of correlation -- 7.2.1 Conditional permutation tests: the product moment correlation coefficient -- 7.2.2 Rank transformation: Spearman correlation -- 7.2.3 Sign transformation -- 7.2.4 Kendall's τ and Theil's statistic -- 7.2.5 Mean square successive difference test -- 7.2.6 Contingency tables, correlation ratios -- 7.2.7 Computational notes -- 7.3 One-sample location -- 7.3.1 Medians -- 7.3.2 Hypothesis testing -- 7.3.3 Confidence regions -- 7.3.4 Point estimation -- 7.3.5 Symmetric distributions -- 7.3.6 Hypothesis testing -- 7.3.7 Confidence regions -- 7.3.8 Point estimation -- 7.3.9 Symmetric distributions: transformation of observations -- 7.3.10 Symmetric distributions: sign statistics -- 7.3.11 Symmetric distributions: rank statistics -- 7.3.12 Hypothesis testing and confidence limits -- 7.3.13 Point estimation -- 7.4 Two-sample location problems -- 7.4.1 Introduction: randomization -- 7.4.2 Medians and sign tests -- 7.4.3 Testing a specified (Bx, By) -- 7.4.4 Confidence regions and point estimation -- 7.4.5 Mean statistics -- 7.4.6 Hypothesis testing and confidence limits -- 7.4.7 Point estimation -- 7.4.8 Alternative calculation of QA -- 7.4.9 Rank statistics -- 7.4.10 Confidence regions, point estimation -- 7.4.11 Other transformations of (ui, vi) -- 7.5 Three-sample location problems
8 Miscellaneous complements: 8.1 Linearization representation -- 8.2 Asymptotic relative efficiency -- 8.3 Estimating equations and the smoothing of statistics -- 8.4 Least squares smoothing -- 8.5 Kernel gradient estimates -- 8.5.1 Kernel density estimation -- 8.5.2 Estimating the mean density -- 8.6 Bootstrap estimation of standard errors -- 8.7 Conditional standard errors -- 8.7.1 Introduction and definitions -- 8.7.2 Large sample calculations
References -- Index

Reviews

"In summary, the book is both readable and informative. It probably covers more material than would normally be included in a standard course on distribution-free methods, but it would be a useful reference for any student following such a course."
- The Statistician

Biographical note

Johannes Maritz is a professor in the Department of Statistics, University of Stellenbosch, South Africa.

Description

Distribution-free statistical methods enable users to make statistical inferences with minimum assumptions about the population in question.