Learning and Generalisation: With Applications to Neural Networks
by Mathukumalli Vidyasagar

Hardcover (Second Edition 2003)

$199.99 


Overview

Learning and Generalization provides a formal mathematical theory addressing intuitive questions such as:

• How does a machine learn a concept on the basis of examples?

• How can a neural network, after training, correctly predict the outcome of a previously unseen input?

• How much training is required to achieve a given level of accuracy in the prediction?

• How can one identify the dynamical behaviour of a nonlinear control system by observing its input-output behaviour over a finite time?

The second edition covers new areas including:

• support vector machines;

• fat-shattering dimensions and applications to neural network learning;

• learning with dependent samples generated by a beta-mixing process;

• connections between system identification and learning theory;

• probabilistic solution of 'intractable problems' in robust control and matrix theory using randomized algorithms.

It also contains solutions to some of the open problems posed in the first edition, while adding new open problems.


Product Details

ISBN-13: 9781852333737
Publisher: Springer London
Publication date: 11/11/2002
Series: Communications and Control Engineering
Edition description: Second Edition 2003
Pages: 488
Product dimensions: 6.10 in (w) x 9.25 in (h) x 0.36 in (d)

Table of Contents

1. Introduction
2. Preliminaries
3. Problem Formulations
4. Vapnik-Chervonenkis, Pseudo- and Fat-Shattering Dimensions
5. Uniform Convergence of Empirical Means
6. Learning Under a Fixed Probability Measure
7. Distribution-Free Learning
8. Learning Under an Intermediate Family of Probabilities
9. Alternate Models of Learning
10. Applications to Neural Networks
11. Applications to Control Systems
12. Some Open Problems