Fundamentals of Machine Learning

by Thomas Trappenberg
ISBN-10: 0198828047
ISBN-13: 9780198828044
Pub. Date: 01/28/2020
Publisher: Oxford University Press
Price: $54.00
Overview

Interest in machine learning is exploding worldwide, both in research and for industrial applications. Machine learning is fast becoming a fundamental part of everyday life.

This book is a brief introduction to the field, exploring its importance across a range of disciplines, from science to engineering, as well as its broader impact on our society. It is written in a style that balances brevity of explanation with rigorous mathematical argument, outlining the principal ideas while providing a comprehensive overview of a variety of methods and their applications. This includes an introduction to Bayesian approaches to modeling, as well as deep learning.

Writing small programs to apply machine learning techniques is made easy by high-level programming systems, and this book shows examples in Python with the machine learning libraries 'sklearn' and 'Keras'. The first four chapters concentrate on the practical side of applying machine learning techniques. The following four chapters discuss more fundamental concepts, including their formulation in a probabilistic context. This is followed by two chapters on advanced models: recurrent neural networks and reinforcement learning. The book closes with a brief discussion of the impact of machine learning and AI on our society.
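To illustrate the kind of high-level workflow described here, a minimal sklearn classification sketch (illustrative only, not code from the book; the dataset and classifier choices are assumptions) might look like:

```python
# Train a support vector classifier on the Iris dataset and
# report held-out accuracy -- a few lines suffice with sklearn.
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, random_state=0)

clf = SVC()                        # support vector classifier, default RBF kernel
clf.fit(X_train, y_train)          # learn from the training split
score = clf.score(X_test, y_test)  # accuracy on the held-out test split
print(round(score, 2))
```

Swapping `SVC` for `RandomForestClassifier` or `MLPClassifier` requires changing only one line, which is the practical point the first part of the book makes.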

Fundamentals of Machine Learning provides a brief and accessible introduction to this rapidly growing field, one that will appeal to students and researchers across computer science and computational neuroscience, as well as the broader cognitive sciences.

Product Details

ISBN-13: 9780198828044
Publisher: Oxford University Press
Publication date: 01/28/2020
Pages: 260
Product dimensions: 9.60(w) x 7.40(h) x 0.60(d)

About the Author

Thomas Trappenberg, Professor of Computer Science, Dalhousie University

Dr. Trappenberg is a professor of Computer Science at Dalhousie University. He holds a PhD in physics from RWTH Aachen University and has held research positions in Canada, at RIKEN in Japan, and at Oxford in England. His main research areas are computational neuroscience, machine learning, and robotics. He is the author of Fundamentals of Computational Neuroscience and the co-founder of Nexus Robotics and ReelData. He is currently working on applying AI to several areas in the food industry and in medical applications.

Table of Contents

1. Introduction
1.1. The basic idea and history of Machine Learning
1.2. Mathematical formulation of the basic learning problem
1.3. Nonlinear regression in high dimensions
1.4. Recent advancements
1.5. No free lunch

I A PRACTICAL GUIDE TO MACHINE LEARNING
2. Scientific programming with Python
2.1. Programming environment
2.2. Basic language elements
2.3. Code efficiency and vectorization
2.4. Data handling
2.5. Image processing and convolutional filters
3. Machine learning with sklearn
3.1. Classification with SVC, RFC and MLP
3.2. Performance measures and evaluations
3.3. Data handling
3.4. Dimensionality reduction, feature selection, and t-SNE
3.5. Decision Trees and Random Forests *
3.6. Support Vector Machines (SVM) *
4. Neural Networks and Keras
4.1. Neurons and the threshold perceptron
4.2. Multilayer Perceptron (MLP) and Keras
4.3. Representational learning
4.4. Convolutional Neural Networks (CNNs)
4.5. What and Where
4.6. More tricks of the trade

II FOUNDATIONS: REGRESSION AND PROBABILISTIC MODELING
5. Regression and optimization
5.1. Linear regression and gradient descent
5.2. Error surface and challenges for gradient descent
5.3. Advanced gradient optimization (learning)
5.4. Regularization: Ridge regression and LASSO
5.5. Nonlinear regression
5.6. Backpropagation
5.7. Automatic differentiation
6. Basic probability theory
6.1. Random numbers and their probability (density) function
6.2. Moments: mean, variance, etc.
6.3. Examples of probability (density) functions
6.4. Some advanced concepts
6.5. Density functions of multiple random variables
6.6. How to combine prior knowledge with new evidence
7. Probabilistic regression and Bayes nets
7.1. Probabilistic models
7.2. Learning in probabilistic models: Maximum likelihood estimate
7.3. Probabilistic classification
7.4. MAP and Regularization with priors
7.5. Bayes Nets: Multivariate causal modeling
7.6. Probabilistic and Stochastic Neural Networks
8. Generative Models
8.1. Modelling classes
8.2. Supervised generative models
8.3. Naive Bayes
8.4. Unsupervised generative models
8.5. Generative Neural Networks

III ADVANCED LEARNING MODELS
9. Cyclic Models and Recurrent Neural Networks
9.1. Sequence processing
9.2. Simple Sequence MLP and RNN in Keras
9.3. Gated RNN and attention
9.4. Models with symmetric lateral connections
10. Reinforcement Learning
10.1. Formalization of the problem setting
10.2. Model-based Reinforcement Learning
10.3. Model-free Reinforcement Learning
10.4. Deep Reinforcement Learning
10.5. Actors and actor-critics
11. AI, the brain, and our society
11.1. Different levels of modeling and the brain
11.2. Machine learning and AI
11.3. The impact of machine learning technology on society