
Pattern Classifiers and Trainable Machines

Authors: J. Sklansky, G.N. Wassel
Language: English | Paperback – 12 Oct 2011
This book is the outgrowth of both a research program and a graduate course at the University of California, Irvine (UCI) since 1966, as well as a graduate course at the California State Polytechnic University, Pomona (Cal Poly Pomona). The research program, part of the UCI Pattern Recognition Project, was concerned with the design of trainable classifiers; the graduate courses were broader in scope, including subjects such as feature selection, cluster analysis, choice of data set, and estimates of probability densities. In the interest of minimizing overlap with other books on pattern recognition or classifier theory, we have selected a few topics of special interest for this book, and treated them in some depth. Some of this material has not been previously published. The book is intended for use as a guide to the designer of pattern classifiers, or as a text in a graduate course in an engineering or computer science curriculum. Although this book is directed primarily to engineers and computer scientists, it may also be of interest to psychologists, biologists, medical scientists, and social scientists.

Price: 376.70 lei

New

Express points: 565

Estimated price in other currencies:
€72.09 | $76.24 | £60.13

Printed to order (print on demand)

Economy delivery: 30 December 2024 – 13 January 2025

Order line: 021 569.72.76

Specifications

ISBN-13: 9781461258407
ISBN-10: 1461258405
Pages: 352
Illustrations: XII, 336 p.
Dimensions: 155 x 235 x 18 mm
Weight: 0.49 kg
Edition: Softcover reprint of the original 1st ed. 1981
Publisher: Springer
Series: Springer
Place of publication: New York, NY, United States

Target audience

Research

Contents

1 Introduction and Overview
1.1 Basic Definitions
1.2 Trainable Classifiers and Training Theory
1.3 Assumptions and Notation
1.4 Illustrative Training Process
1.5 Linear Discriminant Functions
1.6 Expanding the Feature Space
1.7 Binary-Input Classifiers
1.8 Weight Space Versus Feature Space
1.9 Statistical Models
1.10 Evaluation of Performance

2 Linearly Separable Classes
2.1 Introduction
2.2 Convex Sets, Summability, and Linear Separability
2.3 Notation and Terminology
2.4 The Perceptron and the Proportional Increment Training Procedure
2.5 The Fixed Fraction Training Procedure
2.6 A Multiclass Training Procedure
2.7 Synthesis by Game Theory
2.8 Simplifying Techniques
2.9 Illustrative Example
2.10 Gradient Descent
2.11 Conditions for Ensuring Desired Convergence
2.12 Gradient Descent for Designing Classifiers
2.13 The Ho-Kashyap Procedure

3 Nonlinear Classifiers
3.1 Introduction
3.2 Φ-Classifiers
3.3 Bayes Estimation: Parametric Training
3.4 Smoothing Techniques: Nonparametric Training
3.5 Bar Graphs
3.6 Parzen Windows and Potential Functions
3.7 Storage Economies
3.8 Fixed-Base Bar Graphs
3.9 Sample Sets and Prototypes
3.10 Close Opposed Pairs of Prototypes
3.11 Locally Trained Piecewise Linear Classifiers

4 Loss Functions and Stochastic Approximation
4.1 Introduction
4.2 A Loss Function for the Proportional Increment Procedure
4.3 The Sample Gradient
4.4 The Use of Prior Knowledge
4.5 Loss Functions and Gradients of Some Important Training Procedures
4.6 Loss Functions Compared
4.7 Unequal Costs of Category Decisions
4.8 Stochastic Approximation
4.9 Gradients for Various Constituent Densities and Hyperplanes
4.10 Conclusion

5 Linear Classifiers for Nonseparable Classes
5.1 Modifications of Gradient Descent
5.2 Normalization, Origin Selection, and Initial Vector
5.3 The Window Training Procedure
5.4 The Minimum Mean Square Error Training Procedure
5.5 The Equalized Error Training Procedure
5.6 Accounting for Unequal Costs
5.7 An Application
5.8 Summary

6 Markov Chain Training Models for Nonseparable Classes
6.1 Introduction
6.2 The Problem of Analyzing a Stochastic Difference Equation
6.3 Examples of Single-Feature Classifiers
6.4 A Single-Feature Classifier with Constant Increment Training
6.5 Basic Properties of Learning Dynamics
6.6 Ergodicity and Stability in the Large
6.7 Train-Work Schedules: Two-Mode Classes
6.8 Optimal Finite Memory Learning
6.9 Multidimensional Feature Space

7 Continuous-State Models
7.1 Introduction
7.2 The Centroid Equation
7.3 Proof that ?(n) = O(?)U for n? ≤ t < ∞
7.4 The Covariance Equation
7.5 Learning Curves and Variance Curves
7.6 Normalization with Respect to t
7.7 Illustrative Examples
7.8 Shapes of Learning Curves in Single-Feature Classifiers
7.9 How Close Are the Equal Error and Minimum Error Points?
7.10 Asymptotic Stability in the Large

Appendix A Vectors and Matrices
A.1 Vector Inequalities and Other Vector Notation
A.2 Permutation Matrices
Appendix B Proof of Convergence for the Window Procedure
Appendix C Proof of Convergence for the Equalized Error Procedure
C.2 Proof of Theorem 5.3
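
For readers scanning the contents: Section 2.4 treats the proportional increment (perceptron) training procedure. As a rough illustration of the kind of algorithm the book analyzes, here is a minimal Python sketch of that rule; the function name, variable names, learning rate, and stopping criterion are our own illustrative choices, not the book's notation.

    def train_perceptron(samples, rate=1.0, max_epochs=100):
        # Proportional increment rule: on each misclassified sample,
        # add rate * label * sample to the weight vector.
        dim = len(samples[0][0])
        w = [0.0] * dim          # weight vector
        b = 0.0                  # bias (threshold) term
        for _ in range(max_epochs):
            errors = 0
            for x, label in samples:          # label is +1 or -1
                activation = sum(wi * xi for wi, xi in zip(w, x)) + b
                if label * activation <= 0:   # misclassified or on the boundary
                    w = [wi + rate * label * xi for wi, xi in zip(w, x)]
                    b += rate * label
                    errors += 1
            if errors == 0:       # a full pass with no corrections: done
                break
        return w, b

    # Example: a small linearly separable two-feature training set
    data = [([0.0, 0.0], -1), ([1.0, 0.0], -1), ([1.0, 2.0], +1), ([2.0, 2.0], +1)]
    print(train_perceptron(data))

Convergence of this procedure is guaranteed only when the two classes are linearly separable; the nonseparable case is exactly what Chapters 5 and 6 of the book address.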