Understanding machine learning : from theory to algorithms / Shai Shalev-Shwartz, The Hebrew University, Jerusalem, Shai Ben-David, University of Waterloo, Canada.
By: Shalev-Shwartz, Shai.
Contributor(s): Ben-David, Shai.
Description: xvi, 397 pages : illustrations ; 26 cm.
ISBN: 9781107057135 (hardback); 1107057132 (hardback).
Subject(s): Machine learning | Algorithms | COMPUTERS / Computer Vision & Pattern Recognition
DDC classification: 006.3/1
Other classification: COM016000
Online resources: Full-text online

Item type | Current location | Call number | Status | Barcode
---|---|---|---|---
Book | Skoltech library Shelves | Q325.5 .S475 2014 | Available | 2000007414
Book | Skoltech library Shelves | Q325.5 .S475 2014 | Available | 2000006567
Browsing Skoltech library Shelves (shelving location: Shelves):

- Q325.5 .M64 2012 Foundations of machine learning
- Q325.5 .P48 2017 Elements of causal inference
- Q325.5 .S475 2014 Understanding machine learning
- Q325.6 .R45 2018 Reinforcement learning
- Q325.6 .S88 1998 Reinforcement learning
Includes bibliographical references (pages 385-393) and index.
Contents note:
- 1. Introduction
- Part I. Foundations: 2. A gentle start; 3. A formal learning model; 4. Learning via uniform convergence; 5. The bias-complexity tradeoff; 6. The VC-dimension; 7. Non-uniform learnability; 8. The runtime of learning
- Part II. From Theory to Algorithms: 9. Linear predictors; 10. Boosting; 11. Model selection and validation; 12. Convex learning problems; 13. Regularization and stability; 14. Stochastic gradient descent; 15. Support vector machines; 16. Kernel methods; 17. Multiclass, ranking, and complex prediction problems; 18. Decision trees; 19. Nearest neighbor; 20. Neural networks
- Part III. Additional Learning Models: 21. Online learning; 22. Clustering; 23. Dimensionality reduction; 24. Generative models; 25. Feature selection and generation
- Part IV. Advanced Theory: 26. Rademacher complexities; 27. Covering numbers; 28. Proof of the fundamental theorem of learning theory; 29. Multiclass learnability; 30. Compression bounds; 31. PAC-Bayes
- Appendix A. Technical lemmas; Appendix B. Measure concentration; Appendix C. Linear algebra
"Machine learning is one of the fastest growing areas of computer science, with far-reaching applications. The aim of this textbook is to introduce machine learning, and the algorithmic paradigms it offers, in a principled way. The book provides an extensive theoretical account of the fundamental ideas underlying machine learning and the mathematical derivations that transform these principles into practical algorithms. Following a presentation of the basics of the field, the book covers a wide array of central topics that have not been addressed by previous textbooks. These include a discussion of the computational complexity of learning and the concepts of convexity and stability; important algorithmic paradigms including stochastic gradient descent, neural networks, and structured output learning; and emerging theoretical concepts such as the PAC-Bayes approach and compression-based bounds. Designed for an advanced undergraduate or beginning graduate course, the text makes the fundamentals and algorithms of machine learning accessible to students and non-expert readers in statistics, computer science, mathematics, and engineering"--
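The summary highlights stochastic gradient descent among the book's central algorithmic paradigms. As a minimal illustrative sketch only (not the book's own presentation; the function name and toy data below are invented for illustration), SGD on a least-squares linear regression objective looks like:

```python
import random

def sgd_linear(data, lr=0.05, epochs=200, seed=0):
    """Fit y ~ w*x + b by updating on one example at a time."""
    rng = random.Random(seed)
    w, b = 0.0, 0.0
    for _ in range(epochs):
        rng.shuffle(data)                   # visit samples in random order
        for x, y in data:
            err = (w * x + b) - y           # prediction error on one sample
            w -= lr * err * x               # gradient step for the weight
            b -= lr * err                   # gradient step for the bias
    return w, b

# Toy noiseless data generated from y = 2x + 1
points = [(x / 10.0, 2 * (x / 10.0) + 1) for x in range(-10, 11)]
w, b = sgd_linear(points)                   # w, b approach 2 and 1
```

The single-sample update is what distinguishes SGD from full-batch gradient descent: each step is cheap and noisy, and convergence is driven by the average of many such steps.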