Machine Learning: A Bayesian and Optimization Perspective

by Sergios Theodoridis

Hardcover (2nd ed.)

$105.00 

Overview

Machine Learning: A Bayesian and Optimization Perspective, 2nd edition, gives a unified perspective on machine learning by covering both pillars of supervised learning, namely regression and classification. The book starts with the basics, including mean square, least squares, and maximum likelihood methods, ridge regression, Bayesian decision theory classification, logistic regression, and decision trees. It then progresses to more recent techniques, covering sparse modeling methods, learning in reproducing kernel Hilbert spaces and support vector machines, Bayesian inference with a focus on the EM algorithm and its approximate inference variational versions, Monte Carlo methods, and probabilistic graphical models, focusing on Bayesian networks, hidden Markov models, and particle filtering. Dimensionality reduction and latent variable modeling are also covered in depth.
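
As a flavor of the kind of computer exercise the book pairs with these basics, here is a minimal ridge-regression sketch in Python/NumPy. It is an illustration only, not code from the book; the toy data and the regularization value are assumptions. It solves the regularized normal equations w = (XᵀX + λI)⁻¹Xᵀy.

```python
# Minimal, illustrative ridge regression (not from the book):
# solve the regularized normal equations (X^T X + lam*I) w = X^T y.
import numpy as np

def ridge_regression(X, y, lam=1.0):
    """Return ridge-regression weights for design matrix X and targets y."""
    n_features = X.shape[1]
    # lam > 0 keeps the linear system well conditioned.
    return np.linalg.solve(X.T @ X + lam * np.eye(n_features), X.T @ y)

# Toy usage: noisy linear data with a known ground-truth weight vector.
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 3))
w_true = np.array([2.0, -1.0, 0.5])
y = X @ w_true + 0.1 * rng.normal(size=100)
print(ridge_regression(X, y, lam=0.1))  # approximately w_true
```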

This palette of techniques concludes with an extended chapter on neural networks and deep learning architectures. The book also covers the fundamentals of statistical parameter estimation, Wiener and Kalman filtering, convexity and convex optimization, including a chapter on stochastic approximation and the gradient descent family of algorithms, presenting related online learning techniques as well as concepts and algorithmic versions for distributed optimization.
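
To make the stochastic approximation theme concrete, the following is a minimal sketch, again illustrative rather than the book's code, of the least-mean-squares (LMS) algorithm, the canonical member of the gradient descent family of online algorithms; the step size and toy data are assumptions.

```python
# Minimal, illustrative LMS sketch (not from the book). Each sample
# (x_n, y_n) triggers a stochastic gradient step on the squared error:
#     w <- w + mu * (y_n - w^T x_n) * x_n
import numpy as np

def lms(X, y, mu=0.01, n_epochs=50):
    """Least-mean-squares: sample-by-sample (online) weight updates."""
    w = np.zeros(X.shape[1])
    for _ in range(n_epochs):
        for x_n, y_n in zip(X, y):
            e = y_n - w @ x_n       # instantaneous prediction error
            w = w + mu * e * x_n    # stochastic gradient step
    return w

# Toy usage: recover a linear model from noisy samples.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3))
w_true = np.array([1.0, -2.0, 0.5])
y = X @ w_true + 0.05 * rng.normal(size=200)
print(lms(X, y))  # close to w_true for a suitably small step size mu
```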

Focusing on the physical reasoning behind the mathematics, without sacrificing rigor, all methods and techniques are explained in depth and supported by examples and problems, making the book an invaluable resource for students and researchers who want to understand and apply machine learning concepts. Most chapters include typical case studies and computer exercises, in both MATLAB and Python.

The chapters are written to be as self-contained as possible, making the text suitable for a variety of courses: pattern recognition, statistical/adaptive signal processing, and statistical/Bayesian learning, as well as courses on sparse modeling, deep learning, and probabilistic graphical models.

New to this edition:

  • Complete rewrite of the chapter on Neural Networks and Deep Learning to reflect the latest advances since the 1st edition. The chapter, starting from the basic perceptron and feed-forward neural network concepts, now presents an in-depth treatment of deep networks, including recent optimization algorithms, batch normalization, regularization techniques such as the dropout method (see the sketch after this list), convolutional neural networks, recurrent neural networks, attention mechanisms, adversarial examples and training, capsule networks, and generative architectures, such as restricted Boltzmann machines (RBMs), variational autoencoders, and generative adversarial networks (GANs).
  • Expanded treatment of Bayesian learning to include nonparametric Bayesian methods, with a focus on the Chinese restaurant and the Indian buffet processes.
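
As a taste of one regularization technique named above, here is a minimal, illustrative sketch of inverted dropout in NumPy; the function name and keep probability are assumptions for this toy, not the book's code.

```python
# Minimal, illustrative "inverted" dropout sketch (not from the book).
# During training, each unit is kept with probability p_keep and the
# survivors are rescaled by 1/p_keep to preserve the expected activation.
import numpy as np

def dropout(activations, p_keep=0.8, rng=None):
    """Apply inverted dropout to a layer's activations during training."""
    rng = np.random.default_rng() if rng is None else rng
    mask = rng.random(activations.shape) < p_keep   # Bernoulli keep-mask
    return activations * mask / p_keep

h = np.ones((2, 4))   # toy hidden-layer activations
print(dropout(h))     # roughly 80% of units survive, scaled by 1/0.8
```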

Product Details

ISBN-13: 9780128188033
Publisher: Elsevier Science
Publication date: 07/17/2020
Edition description: 2nd ed.
Pages: 1160
Product dimensions: 7.50(w) x 9.30(h) x 2.40(d) inches

About the Author

Sergios Theodoridis is Professor of Signal Processing and Machine Learning in the Department of Informatics and Telecommunications of the University of Athens.

He is the co-author of the bestselling book, Pattern Recognition, and the co-author of Introduction to Pattern Recognition: A MATLAB Approach.

He serves as Editor-in-Chief for the IEEE Transactions on Signal Processing, and he is co-Editor-in-Chief, with Rama Chellappa, of the Academic Press Library in Signal Processing.

He has received a number of awards, including the 2014 IEEE Signal Processing Magazine Best Paper Award, the 2009 IEEE Computational Intelligence Society Transactions on Neural Networks Outstanding Paper Award, the 2014 IEEE Signal Processing Society Education Award, and the EURASIP 2014 Meritorious Service Award. He has served as a Distinguished Lecturer for the IEEE Signal Processing Society and the IEEE Circuits and Systems Society. He is a Fellow of EURASIP and a Fellow of the IEEE.

Table of Contents

1. Introduction
2. Probability and Stochastic Processes
3. Learning in Parametric Modeling: Basic Concepts and Directions
4. Mean-Square Error Linear Estimation
5. Stochastic Gradient Descent: The LMS Algorithm and Its Family
6. The Least-Squares Family
7. Classification: A Tour of the Classics
8. Parameter Learning: A Convex Analytic Path
9. Sparsity-Aware Learning: Concepts and Theoretical Foundations
10. Sparsity-Aware Learning: Algorithms and Applications
11. Learning in Reproducing Kernel Hilbert Spaces
12. Bayesian Learning: Inference and the EM Algorithm
13. Bayesian Learning: Approximate Inference and Nonparametric Models
14. Monte Carlo Methods
15. Probabilistic Graphical Models: Part 1
16. Probabilistic Graphical Models: Part 2
17. Particle Filtering
18. Neural Networks and Deep Learning
19. Dimensionality Reduction and Latent Variables Modeling
