An Information-Theoretic Approach to Neural Computing

Overview

A detailed formulation of neural networks from the information-theoretic viewpoint. The authors show how this perspective provides new insights into the design theory of neural networks. In particular, they demonstrate how these methods apply to supervised and unsupervised learning, including feature extraction, linear and nonlinear independent component analysis, and Boltzmann machines. Readers are assumed to have a basic understanding of neural networks, but all the relevant concepts from information theory are carefully introduced and explained. Consequently, readers from varied scientific disciplines, notably cognitive scientists, engineers, physicists, statisticians, and computer scientists, will find this an extremely valuable introduction to the field.
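For orientation, the quantities the book builds on (introduced in Chapter 2.1) are the entropy, the Kullback-Leibler entropy, and the mutual information. As a rough reference, in standard notation (which may differ in detail from the book's):

    \begin{align*}
      H(X) &= -\sum_{x} p(x)\log p(x) && \text{(entropy)}\\
      D_{\mathrm{KL}}(p\,\|\,q) &= \sum_{x} p(x)\log\frac{p(x)}{q(x)} && \text{(Kullback-Leibler entropy)}\\
      I(X;Y) &= \sum_{x,y} p(x,y)\log\frac{p(x,y)}{p(x)\,p(y)} = H(X)-H(X\mid Y) && \text{(mutual information)}
    \end{align*}

In this framing, infomax-style unsupervised learning (Part I of the book) amounts to choosing network parameters that maximize the mutual information between input and output, or minimize the redundancy among the outputs.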

Product Details

  • ISBN-13: 9781461284697
  • Publisher: Springer New York
  • Publication date: 7/31/2012
  • Series: Perspectives in Neural Computing
  • Edition description: Softcover reprint of the original 1st ed. 1996
  • Edition number: 1
  • Pages: 262
  • Product dimensions: 6.14 (w) x 9.21 (h) x 0.59 (d) inches

Table of Contents

1 Introduction
2 Preliminaries of Information Theory and Neural Networks
  2.1 Elements of Information Theory
    2.1.1 Entropy and Information
    2.1.2 Joint Entropy and Conditional Entropy
    2.1.3 Kullback-Leibler Entropy
    2.1.4 Mutual Information
    2.1.5 Differential Entropy, Relative Entropy and Mutual Information
    2.1.6 Chain Rules
    2.1.7 Fundamental Information Theory Inequalities
    2.1.8 Coding Theory
  2.2 Elements of the Theory of Neural Networks
    2.2.1 Neural Network Modeling
    2.2.2 Neural Architectures
    2.2.3 Learning Paradigms
    2.2.4 Feedforward Networks: Backpropagation
    2.2.5 Stochastic Recurrent Networks: Boltzmann Machine
    2.2.6 Unsupervised Competitive Learning
    2.2.7 Biological Learning Rules
I: Unsupervised Learning
3 Linear Feature Extraction: Infomax Principle
  3.1 Principal Component Analysis: Statistical Approach
    3.1.1 PCA and Diagonalization of the Covariance Matrix
    3.1.2 PCA and Optimal Reconstruction
    3.1.3 Neural Network Algorithms and PCA
  3.2 Information Theoretic Approach: Infomax
    3.2.1 Minimization of Information Loss Principle and Infomax Principle
    3.2.2 Upper Bound of Information Loss
    3.2.3 Information Capacity as a Lyapunov Function of the General Stochastic Approximation
4 Independent Component Analysis: General Formulation and Linear Case
  4.1 ICA: Definition
  4.2 General Criteria for ICA
    4.2.1 Cumulant Expansion Based Criterion for ICA
    4.2.2 Mutual Information as Criterion for ICA
  4.3 Linear ICA
  4.4 Gaussian Input Distribution and Linear ICA
    4.4.1 Networks with Anti-Symmetric Lateral Connections
    4.4.2 Networks with Symmetric Lateral Connections
    4.4.3 Examples of Learning with Symmetric and Anti-Symmetric Networks
  4.5 Learning in Gaussian ICA with Rotation Matrices: PCA
    4.5.1 Relationship Between PCA and ICA in Gaussian Input Case
    4.5.2 Linear Gaussian ICA and the Output Dimension Reduction
  4.6 Linear ICA in Arbitrary Input Distribution
    4.6.1 Some Properties of Cumulants at the Output of a Linear Transformation
    4.6.2 The Edgeworth Expansion Criteria and Theorem 4.6.2
    4.6.3 Algorithms for Output Factorization in the Non-Gaussian Case
    4.6.4 Experimental Results of Linear ICA Algorithms in the Non-Gaussian Case
5 Nonlinear Feature Extraction: Boolean Stochastic Networks
  5.1 Infomax Principle for Boltzmann Machines
    5.1.1 Learning Model
    5.1.2 Examples of Infomax Principle in Boltzmann Machine
  5.2 Redundancy Minimization and Infomax for the Boltzmann Machine
    5.2.1 Learning Model
    5.2.2 Numerical Complexity of the Learning Rule
    5.2.3 Factorial Learning Experiments
    5.2.4 Receptive Fields Formation from a Retina
  5.3 Appendix
6 Nonlinear Feature Extraction: Deterministic Neural Networks
  6.1 Redundancy Reduction by Triangular Volume Conserving Architectures
    6.1.1 Networks with Linear, Sigmoidal and Higher Order Activation Functions
    6.1.2 Simulations and Results
  6.2 Unsupervised Modeling of Chaotic Time Series
    6.2.1 Dynamical System Modeling
  6.3 Redundancy Reduction by General Symplectic Architectures
    6.3.1 General Entropy Preserving Nonlinear Maps
    6.3.2 Optimizing a Parameterized Symplectic Map
    6.3.3 Density Estimation and Novelty Detection
  6.4 Example: Theory of Early Vision
    6.4.1 Theoretical Background
    6.4.2 Retina Model
II: Supervised Learning
7 Supervised Learning and Statistical Estimation
  7.1 Statistical Parameter Estimation: Basic Definitions
    7.1.1 Cramér-Rao Inequality for Unbiased Estimators
  7.2 Maximum Likelihood Estimators
    7.2.1 Maximum Likelihood and the Information Measure
  7.3 Maximum A Posteriori Estimation
  7.4 Extensions of MLE to Include Model Selection
    7.4.1 Akaike's Information Theoretic Criterion (AIC)
    7.4.2 Minimal Description Length and Stochastic Complexity
  7.5 Generalization and Learning on the Same Data Set
8 Statistical Physics Theory of Supervised Learning and Generalization
  8.1 Statistical Mechanics Theory of Supervised Learning
    8.1.1 Maximum Entropy Principle
    8.1.2 Probability Inference with an Ensemble of Networks
    8.1.3 Information Gain and Complexity Analysis
  8.2 Learning with Higher Order Neural Networks
    8.2.1 Partition Function Evaluation
    8.2.2 Information Gain in Polynomial Networks
    8.2.3 Numerical Experiments
  8.3 Learning with General Feedforward Neural Networks
    8.3.1 Partition Function Approximation
    8.3.2 Numerical Experiments
  8.4 Statistical Theory of Unsupervised and Supervised Factorial Learning
    8.4.1 Statistical Theory of Unsupervised Factorial Learning
    8.4.2 Duality Between Unsupervised and Maximum Likelihood Based Supervised Learning
9 Composite Networks
  9.1 Cooperation and Specialization in Composite Networks
  9.2 Composite Models as Gaussian Mixtures
10 Information Theory Based Regularizing Methods
  10.1 Theoretical Framework
    10.1.1 Network Complexity Regulation
    10.1.2 Network Architecture and Learning Paradigm
    10.1.3 Applications of the Mutual Information Based Penalty Term
  10.2 Regularization in Stochastic Potts Neural Network
    10.2.1 Neural Network Architecture
    10.2.2 Simulations
References
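As a concrete taste of Part I, the PCA route to linear feature extraction (Section 3.1.1, diagonalization of the covariance matrix) fits in a few lines of NumPy. This is an illustrative sketch under the standard definitions, not code from the book; the function name pca and the toy data are assumptions for the example.

    # Illustrative sketch of PCA by covariance diagonalization (cf. Section 3.1.1).
    # Not from the book; assumes only NumPy.
    import numpy as np

    def pca(X, k):
        """Project the rows of X (n samples x d features) onto the top-k
        principal components, found by diagonalizing the covariance matrix."""
        Xc = X - X.mean(axis=0)                # center the data
        C = Xc.T @ Xc / (len(X) - 1)           # sample covariance matrix (d x d)
        eigvals, eigvecs = np.linalg.eigh(C)   # eigh: C is symmetric
        order = np.argsort(eigvals)[::-1]      # sort eigenvalues, descending
        W = eigvecs[:, order[:k]]              # top-k eigenvectors = components
        return Xc @ W                          # k-dimensional projection

    # Example: 200 correlated 3-D points reduced to 2 dimensions.
    rng = np.random.default_rng(0)
    X = rng.normal(size=(200, 3)) @ np.array([[2.0, 0, 0], [1.0, 1.0, 0], [0, 0, 0.1]])
    Y = pca(X, 2)
    print(Y.shape)  # (200, 2)

The information-theoretic reading developed in Chapter 3 is that, for Gaussian inputs, this projection is the linear map that loses the least information about the input for a given output dimension.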
