Advances in Neural Information Processing Systems 9: Proceedings of The 1996 Conference / Edition 1

Hardcover
The annual conference on Neural Information Processing Systems (NIPS) is the flagship conference on neural computation. It draws preeminent academic researchers from around the world and is widely considered a showcase conference for new developments in network algorithms and architectures. The broad range of interdisciplinary research areas represented includes neural networks and genetic algorithms, cognitive science, neuroscience and biology, computer science, AI, applied mathematics, physics, and many branches of engineering. Only about 30% of the papers submitted are accepted for presentation at NIPS, so the quality is exceptionally high. All of the papers presented appear in these proceedings.


Editorial Reviews

Editorial Reviews

Some 150 papers explore a wide variety of topics in neural computation, including the design and analysis of learning algorithms, learning theory, neuroscience, vision, speech, and control theory. Applications are described in such areas as operations research, finance, applied statistics, and experimental physics. Much attention is devoted to probabilistic methods, independent component analysis, and functional approximation in temporal difference learning. Annotation c. by Book News, Inc., Portland, OR.

Product Details

  • ISBN-13: 9780262100656
  • Publisher: MIT Press
  • Publication date: 5/5/1997
  • Series: Bradford Books Series
  • Edition description: First Edition
  • Edition number: 1
  • Pages: 1118
  • Product dimensions: 7.30 (w) x 10.10 (h) x 2.20 (d)

Meet the Author

Michael C. Mozer is a Professor in the Department of Computer Science and the Institute of Cognitive Science at the University of Colorado, Boulder. In 1990 he received the Presidential Young Investigator Award from the National Science Foundation.

Michael I. Jordan is Professor of Computer Science and of Statistics at the University of California, Berkeley, and recipient of the ACM/AAAI Allen Newell Award.


Table of Contents

NIPS Committees
Part I Cognitive Science
Text-Based Information Retrieval Using Exponentiated Gradient Descent
Why did TD-Gammon Work?
Neural Models for Part-Whole Hierarchies
Part II Neuroscience
Temporal Low-Order Statistics of Natural Sounds
Reconstructing Stimulus Velocity from Neuronal Responses in Area MT
3D Object Recognition: A Model of View-Tuned Neurons
A Hierarchical Model of Visual Rivalry
Neural Network Models of Chemotaxis in the Nematode Caenorhabditis Elegans
Extraction of Temporal Features in the Electrosensory System of Weakly Electric Fish
A Neural Model of Visual Contour Integration
Learning Exact Patterns of Quasi-synchronization among Spiking Neurons from Data on Multi-unit Recordings
Complex-Cell Responses Derived from Center-Surround Inputs: The Surprising Power of Intradendritic Computation
Orientation Contrast Sensitivity from Long-range Interactions in Visual Cortex
Statistically Efficient Estimations Using Cortical Lateral Connections
An Architectural Mechanism for Direction-tuned Cortical Simple Cells: The Role of Mutual Inhibition
Cholinergic Modulation Preserves Spike Timing Under Physiologically Realistic Fluctuating Input
A Model of Recurrent Interactions in Primary Visual Cortex
Part III Theory
Neural Learning in Structured Parameter Spaces: Natural Riemannian Gradient
For Valid Generalization, the Size of the Weights is More Important than the Size of the Network
Dynamics of Training
Multilayer Neural Networks: One or Two Hidden Layers?
Support Vector Regression Machines
Size of Multilayer Networks for Exact Learning: Analytic Approach
The Effect of Correlated Input Data on the Dynamics of Learning
Practical Confidence and Prediction Intervals
Statistical Mechanics of the Mixture of Experts
MLP Can Provably Generalize Much Better than VC-bounds Indicate
Radial Basis Function Networks and Complexity Regularization in Function Learning
An Apobayesian Relative of Winnow
Noisy Spiking Neurons with Temporal Coding have more Computational Power than Sigmoidal Neurons
On the Effect of Analog Noise in Discrete-Time Analog Computations
A Mean Field Algorithm for Bayes Learning in Large Feed-forward Neural Networks
Removing Noise in On-Line Search using Adaptive Batch Sizes
Are Hopfield Networks Faster than Conventional Computers?
Hebb Learning of Features based on their Information Content
The Generalisation Cost of RAMnets
Learning with Noise and Regularizers in Multilayer Neural Networks
A Variational Principle for Model-based Morphing
Online Learning from Finite Training Sets: An Analytical Case Study
Support Vector Method for Function Approximation, Regression Estimation, and Signal Processing
The Learning Dynamics of a Universal Approximator
Computing with Infinite Networks
Microscopic Equations in Rough Energy Landscape for Neural Networks
Time Series Prediction using Mixtures of Experts
Part IV Algorithms and Architecture
Genetic Algorithms and Explicit Search Statistics
Consistent Classification, Firm and Soft
Bayesian Model Comparison by Monte Carlo Chaining
Gaussian Processes for Bayesian Classification via Hybrid Monte Carlo
Regression with Input-Dependent Noise: A Bayesian Treatment
GTM: A Principled Alternative to the Self-Organizing Map
The CONDENSATION Algorithm: Conditional Density Propagation and Applications to Visual Tracking
Neural Clustering via Concave Minimization
Improving the Accuracy and Speed of Support Vector Machines
Estimating Equivalent Kernels for Neural Networks: A Data Perturbation Approach
Promoting Poor Features to Supervisors: Some Inputs Work Better as Outputs
Self-Organizing and Adaptive Algorithms for Generalized Eigen-Decomposition
Representation and Induction of Finite State Machines using Time-Delay Neural Networks
Solutions to the XOR Problem
Minimizing Statistical Bias with Queries
MIMIC: Finding Optima by Estimating Probability Densities
On a Modification to the Mean Field EM Algorithm in Factorial Learning
Softening Discrete Relaxation
Limitations of Self-organizing Maps for Vector Quantization and Multidimensional Scaling
Continuous Sigmoidal Belief Networks Trained using Slice Sampling
Adaptively Growing Hierarchical Mixtures of Experts
Balancing Between Bagging and Bumping
LSTM can Solve Hard Long Time Lag Problems
One-unit Learning Rules for Independent Component Analysis
Recursive Algorithms for Approximating Probabilities in Graphical Models
Combinations of Weak Classifiers
Hidden Markov Decision Trees
Unification of Information Maximization and Minimization
Unsupervised Learning by Convex and Conic Coding
ARC-LH: A New Adaptive Resampling Algorithm for Improving ANN Classifiers
Bayesian Unsupervised Learning of Higher Order Structure
Source Separation and Density Estimation by Faithful Equivariant SOM
NeuroScale: Novel Topographic Feature Extraction using RBF Networks
Decomposition, Ordered Classes and Incomplete Examples in Classification
Triangulation by Continuous Embedding
Combining Neural Network Regression Estimates with Regularized Linear Weights
A Mixture of Experts Classifier with Learning Based on Both Labelled and Unlabelled Data
Learning Bayesian Belief Networks with Neural Network Estimators
Smoothing Regularizers for Projective Basis Function Networks
Competition Among Networks Improves Committee Performance
Adaptive On-line Learning in Changing Environments
Using Curvature Information for Fast Stochastic Search
Maximum Likelihood Blind Source Separation: A Context-Sensitive Generalization of ICA
A Convergence Proof for the Softassign Quadratic Assignment Algorithm
Second-order Learning Algorithm with Squared Penalty Term
Monotonicity Hints
Training Algorithms for Hidden Markov Models using Entropy Based Distance Models
Clustering Sequences with Hidden Markov Models
Fast Network Pruning and Feature Extraction by using the Unit-OBS Algorithm
Separating Style and Content
Early Brain Damage
Part V Implementation
VLSI Implementation of Cortical Visual Motion Detection Using an Analog Neural Computer
A Spike Based Learning Neuron in Analog VLSI
An Analog Implementation of the Constant Average Statistics Constraint For Sensor Calibration
Analog VLSI Circuits for Attention-Based, Visual Tracking
Dynamically Adaptable CMOS Winner-Take-All Neural Network
An Adaptive WTA using Floating Gate Technology
A Micropower Analog VLSI HMM State Decoder for Wordspotting
Bangs, Clicks, Snaps, Thuds and Whacks: An Architecture for Acoustic Transient Processing
A Silicon Model of Amplitude Modulation Detection in the Auditory Brainstem
Part VI Speech, Handwriting and Signal Processing
Dynamic Features for Visual Speechreading: A Systematic Comparison
Blind Separation of Delayed and Convolved Sources
A Constructive RBF Network for Writer Adaptation
A New Approach to Hybrid HMM/ANN Speech Recognition using Mutual Information Neural Networks
Neural Network Modeling of Speech and Music Signals
A Constructive Learning Algorithm for Discriminant Tangent Models
Dual Kalman Filtering Methods for Nonlinear Prediction, Smoothing and Estimation
Ensemble Methods for Phoneme Classification
Effective Training of a Neural Network Character Classifier for Word Recognition
Part VII Visual Processing
Viewpoint Invariant Face Recognition using Independent Component Analysis and Attractor Networks
Learning Temporally Persistent Hierarchical Representations
Edges are the "Independent Components" of Natural Scenes
Compositionality, MDL Priors, and Object Recognition
Learning Appearance Based Models: Mixtures of Second Moment Experts
Spatial Decorrelation in Orientation Tuned Cortical Cells
Spatiotemporal Coupling and Scaling of Natural Images and Human Visual Sensitivities
Selective Integration: A Model for Disparity Estimation
ARTEX: A Self-organizing Architecture for Classifying Image Regions
Contour Organisation with the EM Algorithm
Visual Cortex Circuitry and Orientation Tuning
Representing Face Images for Emotion Classification
Rapid Visual Processing using Spike Asynchrony
Interpreting Images by Propagating Bayesian Beliefs
Salient Contour Extraction by Temporal Binding in a Cortically-based Network
Part VIII Applications
An Orientation Selective Neural Network for Pattern Identification in Particle Detectors
Adaptive Access Control Applied to Ethernet Data
Predicting Lifetimes in Dynamically Allocated Memory
Multi-Task Learning for Stock Selection
The Neurothermostat: Predictive Optimal Control of Residential Heating Systems
Sequential Tracking in Pricing Financial Options using Model Based and Neural Network Approaches
A Comparison between Neural Networks and other Statistical Techniques for Modeling the Relationship between Tobacco and Alcohol and Cancer
Reinforcement Learning for Dynamic Channel Allocation in Cellular Telephone Systems
Spectroscopic Detection of Cervical Pre-Cancer through Radial Basis Function Networks
Interpolating Earth-science Data using RBF Networks and Mixtures of Experts
Multi-effect Decompositions for Financial Data Modeling
Part IX Control, Navigation and Planning
Multidimensional Triangulation and Interpolation for Reinforcement Learning
Efficient Nonlinear Control with Actor-Tutor Architecture
Local Bandit Approximation for Optimal Learning Problems
Reinforcement Learning for Mixed Open-loop and Closed-loop Control
Multi-Grid Methods for Reinforcement Learning in Controlled Diffusion Processes
Learning from Demonstration
Exploiting Model Uncertainty Estimates for Safe Dynamic Control Learning
Analytical Mean Squared Error Curves in Temporal Difference Learning
Learning Decision Theoretic Utilities through Reinforcement Learning
On-line Policy Improvement using Monte-Carlo Search
Analysis of Temporal-Difference Learning with Function Approximation
Approximate Solutions to Optimal Stopping Problems
Index of Authors
Keyword Index