The Elements of Statistical Learning: Data Mining, Inference, and Prediction, Second Edition

During the past decade there has been an explosion in computation and information technology. With it have come vast amounts of data in a variety of fields such as medicine, biology, finance, and marketing. The challenge of understanding these data has led to the development of new tools in the field of statistics and has spawned new areas such as data mining, machine learning, and bioinformatics. Many of these tools have common underpinnings but are often expressed with different terminology. This book describes the important ideas in these areas in a common conceptual framework. While the approach is statistical, the emphasis is on concepts rather than mathematics. Many examples are given, with a liberal use of color graphics. It is a valuable resource for statisticians and for anyone interested in data mining in science or industry. The book's coverage is broad, from supervised learning (prediction) to unsupervised learning. The many topics include neural networks, support vector machines, classification trees, and boosting, the first comprehensive treatment of this topic in any book.

This major new edition features many topics not covered in the original, including graphical models, random forests, ensemble methods, least angle regression and path algorithms for the lasso, non-negative matrix factorization, and spectral clustering. There is also a chapter on methods for "wide" data (p bigger than n), including multiple testing and false discovery rates.

Trevor Hastie, Robert Tibshirani, and Jerome Friedman are professors of statistics at Stanford University. They are prominent researchers in this area: Hastie and Tibshirani developed generalized additive models and wrote a popular book of that title. Hastie co-developed much of the statistical modeling software and environment in R/S-PLUS and invented principal curves and surfaces. Tibshirani proposed the lasso and is co-author of the very successful An Introduction to the Bootstrap. Friedman is the co-inventor of many data-mining tools, including CART, MARS, projection pursuit, and gradient boosting.
From inside the book
... criterion for choosing f, $\mathrm{EPE}(f) = \mathrm{E}(Y - f(X))^2$ (2.9) $= \int [y - f(x)]^2 \Pr(dx, dy)$, (2.10) the expected (squared) prediction error. By conditioning on X, we can write EPE as $\mathrm{EPE}(f) = \mathrm{E}_X\,\mathrm{E}_{Y|X}\bigl([Y - f(X)]^2 \mid X\bigr)$ (2.11) and we see that ...
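The snippet above cuts off mid-derivation. As a completion of the standard argument (the usual treatment of this criterion, not text quoted from the excerpt), minimizing EPE pointwise under squared-error loss yields the conditional mean, i.e. the regression function:

```latex
\mathrm{EPE}(f) = \mathrm{E}_X\,\mathrm{E}_{Y|X}\!\left([Y - f(X)]^2 \mid X\right)
\;\;\Longrightarrow\;\;
f(x) = \operatorname*{argmin}_{c}\; \mathrm{E}_{Y|X}\!\left([Y - c]^2 \mid X = x\right)
     = \mathrm{E}(Y \mid X = x).
```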
... criterion (2.11)? What happens if we replace the $L_2$ loss function with the $L_1$: $\mathrm{E}\,|Y - f(X)|$? The solution in this case is the conditional median, $\hat f(x) = \operatorname{median}(Y \mid X = x)$, (2.18) which is a different measure of location, and its ...
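The same conditioning argument explains the snippet's claim: under absolute-error loss the pointwise minimizer is the conditional median rather than the conditional mean. A restatement of the standard result (not quoted from the excerpt):

```latex
\mathrm{E}\,|Y - f(X)| = \mathrm{E}_X\,\mathrm{E}_{Y|X}\!\left(|Y - f(X)| \mid X\right),
\qquad
\hat f(x) = \operatorname*{argmin}_{c}\; \mathrm{E}\!\left(|Y - c| \mid X = x\right)
          = \operatorname{median}(Y \mid X = x).
```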
... criterion. With such a model it becomes natural to use least squares as a data criterion for model estimation as in (2.1). Simple modifications can be made to avoid the independence assumption; for example, we can have $\mathrm{Var}(Y \mid X = x) = \sigma$ ...
... criterion for an additive error model. In terms of function approximation, we imagine our parameterized function as a surface in p + 1 space, and what we observe are noisy realizations from it. This is easy to visualize when p = 2 and ...
... criterion for an arbitrary function f, $\mathrm{RSS}(f) = \sum_{i=1}^{N} (y_i - f(x_i))^2$. (2.37) Minimizing (2.37) leads to infinitely many solutions: any function $\hat f$ passing through the training points $(x_i, y_i)$ is a solution. Any particular solution chosen ...
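To see why minimizing (2.37) alone is ill-posed, here is a minimal sketch (my own illustration, not from the book; the data and the two function choices are arbitrary): two different functions both achieve zero RSS on the same training set yet disagree between the training points, so additional restrictions on f are needed to single out a solution.

```python
import numpy as np

# Minimal sketch (illustrative, not from the book): any function that
# interpolates the N training points has RSS(f) = 0, so the criterion
# alone cannot pick a unique solution.
rng = np.random.default_rng(0)
x = np.sort(rng.uniform(0.0, 1.0, 8))
y = np.sin(2 * np.pi * x) + rng.normal(0.0, 0.1, 8)

# Two different zero-RSS solutions: a degree-7 interpolating polynomial
# and a piecewise-linear interpolant through the same points.
poly = np.polynomial.Polynomial.fit(x, y, deg=7)

grid = np.linspace(0.0, 1.0, 101)
piecewise = np.interp(grid, x, y)

# Both reproduce the training responses (RSS = 0 up to round-off) ...
print(np.sum((poly(x) - y) ** 2), np.sum((np.interp(x, x, y) - y) ** 2))
# ... but differ away from the training points.
print(np.max(np.abs(poly(grid) - piecewise)))
```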
Contents
1 Introduction | 1 |
2 Overview of Supervised Learning | 9 |
3 Linear Methods for Regression | 43 |
4 Linear Methods for Classification | 100 |
5 Basis Expansions and Regularization | 139 |
6 Kernel Smoothing Methods | 190 |
7 Model Assessment and Selection | 219 |
8 Model Inference and Averaging | 261 |
9 Additive Models, Trees, and Related Methods | 295 |
10 Boosting and Additive Trees | 337 |
11 Neural Networks | 388 |
12 Support Vector Machines and Flexible Discriminants | 417 |
13 Prototype Methods and Nearest Neighbors | 459 |
14 Unsupervised Learning | 485 |
15 Random Forests | 586 |
16 Ensemble Learning | 605 |
17 Undirected Graphical Models | 625 |
18 High-Dimensional Problems: p ≫ N | 649 |
References | 699 |
Author Index | 729 |
Index | 737 |