Advances in Neural Information Processing Systems 9: Proceedings of The 1996 Conference / Edition 1

Hardcover (Print)
Overview

The annual conference on Neural Information Processing Systems (NIPS) is the flagship conference on neural computation. It draws preeminent academic researchers from around the world and is widely considered to be a showcase conference for new developments in network algorithms and architectures. The broad range of interdisciplinary research areas represented includes neural networks and genetic algorithms, cognitive science, neuroscience and biology, computer science, AI, applied mathematics, physics, and many branches of engineering. Only about 30% of the papers submitted are accepted for presentation at NIPS, so the quality is exceptionally high. All of the papers presented appear in these proceedings.

Genetic algorithms & explicit search statistics, time series prediction using mixtures of experts


Editorial Reviews

Booknews
Some 150 papers explore a wide variety of topics in neural computation, including the design and analysis of learning algorithms, learning theory, neuroscience, vision, speech, and control theory. Applications are described in such areas as operations research, finance, applied statistics, and experimental physics. Much attention is devoted to probabilistic methods, independent component analysis, and functional approximation in temporal difference learning. Annotation c. by Book News, Inc., Portland, Or.

Product Details

  • ISBN-13: 9780262100656
  • Publisher: MIT Press
  • Publication date: 5/5/1997
  • Series: Bradford Books Series
  • Edition description: First Edition
  • Edition number: 1
  • Pages: 1118
  • Product dimensions: 7.30 (w) x 10.10 (h) x 2.20 (d) inches

Meet the Author

Michael C. Mozer is a Professor in the Department of Computer Science and the Institute of Cognitive Science at the University of Colorado, Boulder. In 1990 he received the Presidential Young Investigator Award from the National Science Foundation.

Michael I. Jordan is Professor of Computer Science and of Statistics at the University of California, Berkeley, and recipient of the ACM/AAAI Allen Newell Award.


Table of Contents

Preface
NIPS Committees
Reviewers
Part I Cognitive Science
Text-Based Information Retrieval Using Exponentiated Gradient Descent
Why did TD-Gammon Work?
Neural Models for Part-Whole Hierarchies
Part II Neuroscience
Temporal Low-Order Statistics of Natural Sounds
Reconstructing Stimulus Velocity from Neuronal Responses in Area MT
3D Object Recognition: A Model of View-Tuned Neurons
A Hierarchical Model of Visual Rivalry
Neural Network Models of Chemotaxis in the Nematode Caenorhabditis Elegans
Extraction of Temporal Features in the Electrosensory System of Weakly Electric Fish
A Neural Model of Visual Contour Integration
Learning Exact Patterns of Quasi-synchronization among Spiking Neurons from Data on Multi-unit Recordings
Complex-Cell Responses Derived from Center-Surround Inputs: The Surprising Power of Intradendritic Computation
Orientation Contrast Sensitivity from Long-range Interactions in Visual Cortex
Statistically Efficient Estimations Using Cortical Lateral Connections
An Architectural Mechanism for Direction-tuned Cortical Simple Cells: The Role of Mutual Inhibition
Cholinergic Modulation Preserves Spike Timing Under Physiologically Realistic Fluctuating Input
A Model of Recurrent Interactions in Primary Visual Cortex
Part III Theory
Neural Learning in Structured Parameter Spaces - Natural Riemannian Gradient
For Valid Generalization, the Size of the Weights is More Important than the Size of the Network
Dynamics of Training
Multilayer Neural Networks: One or Two Hidden Layers?
Support Vector Regression Machines
Size of Multilayer Networks for Exact Learning: Analytic Approach
The Effect of Correlated Input Data on the Dynamics of Learning
Practical Confidence and Prediction Intervals
Statistical Mechanics of the Mixture of Experts
MLP Can Probably Generalize Much Better than VC-bounds Indicate
Radial Basis Function Networks and Complexity Regularization in Function Learning
An Apobayesian Relative of Winnow
Noisy Spiking Neurons with Temporal Coding have more Computational Power than Sigmoidal Neurons
On the Effect of Analog Noise in Discrete-Time Analog Computations
A Mean Field Algorithm for Bayes Learning in Large Feed-forward Neural Networks
Removing Noise in On-Line Search using Adaptive Batch Sizes
Are Hopfield Networks Faster than Conventional Computers?
Hebb Learning of Features based on their Information Content
The Generalisation Cost of RAMnets
Learning with Noise and Regularizers in Multilayer Neural Networks
A Variational Principle for Model-based Morphing
Online Learning from Finite Training Sets: An Analytical Case Study
Support Vector Method for Function Approximation, Regression Estimation, and Signal Processing
The Learning Dynamics of a Universal Approximator
Computing with Infinite Networks
Microscopic Equations in Rough Energy Landscape for Neural Networks
Time Series Prediction using Mixtures of Experts
Part IV Algorithms and Architecture
Genetic Algorithms and Explicit Search Statistics
Consistent Classification, Firm and Soft
Bayesian Model Comparison by Monte Carlo Chaining
Gaussian Processes for Bayesian Classification via Hybrid Monte Carlo
Regression with Input-Dependent Noise: A Bayesian Treatment
GTM: A Principled Alternative to the Self-Organizing Map
The CONDENSATION Algorithm - Conditional Density Propagation and Applications to Visual Tracking
Neural Clustering via Concave Minimization
Improving the Accuracy and Speed of Support Vector Machines
Estimating Equivalent Kernels for Neural Networks: A Data Perturbation Approach
Promoting Poor Features to Supervisors: Some Inputs Work Better as Outputs
Self-Organizing and Adaptive Algorithms for Generalized Eigen-Decomposition
Representation and Induction of Finite State Machines using Time-Delay Neural Networks
Solutions to the XOR Problem
Minimizing Statistical Bias with Queries
MIMIC: Finding Optima by Estimating Probability Densities
On a Modification to the Mean Field EM Algorithm in Factorial Learning
Softening Discrete Relaxation
Limitations of Self-organizing Maps for Vector Quantization and Multidimensional Scaling
Continuous Sigmoidal Belief Networks Trained using Slice Sampling
Adaptively Growing Hierarchical Mixtures of Experts
Balancing Between Bagging and Bumping
LSTM can Solve Hard Long Time Lag Problems
One-unit Learning Rules for Independent Component Analysis
Recursive Algorithms for Approximating Probabilities in Graphical Models
Combinations of Weak Classifiers
Hidden Markov Decision Trees
Unification of Information Maximization and Minimization
Unsupervised Learning by Convex and Conic Coding
ARC-LH: A New Adaptive Resampling Algorithm for Improving ANN Classifiers
Bayesian Unsupervised Learning of Higher Order Structure
Source Separation and Density Estimation by Faithful Equivariant SOM
NeuroScale: Novel Topographic Feature Extraction using RBF Networks
Decomposition, Ordered Classes and Incomplete Examples in Classification
Triangulation by Continuous Embedding
Combining Neural Network Regression Estimates with Regularized Linear Weights
A Mixture of Experts Classifier with Learning Based on Both Labelled and Unlabelled Data
Learning Bayesian Belief Networks with Neural Network Estimators
Smoothing Regularizers for Projective Basis Function Networks
Competition Among Networks Improves Committee Performance
Adaptive On-line Learning in Changing Environments
Using Curvature Information for Fast Stochastic Search
Maximum Likelihood Blind Source Separation: A Context-Sensitive Generalization of ICA
A Convergence Proof for the Softassign Quadratic Assignment Algorithm
Second-order Learning Algorithm with Squared Penalty Term
Monotonicity Hints
Training Algorithms for Hidden Markov Models using Entropy Based Distance Models
Clustering Sequences with Hidden Markov Models
Fast Network Pruning and Feature Extraction by using the Unit-OBS Algorithm
Separating Style and Content
Early Brain Damage
Part V Implementation
VLSI Implementation of Cortical Visual Motion Detection Using an Analog Neural Computer
A Spike Based Learning Neuron in Analog VLSI
An Analog Implementation of the Constant Average Statistics Constraint For Sensor Calibration
Analog VLSI Circuits for Attention-Based, Visual Tracking
Dynamically Adaptable CMOS Winner-Take-All Neural Network
An Adaptive WTA using Floating Gate Technology
A Micropower Analog VLSI HMM State Decoder for Wordspotting
Bangs, Clicks, Snaps, Thuds and Whacks: An Architecture for Acoustic Transient Processing
A Silicon Model of Amplitude Modulation Detection in the Auditory Brainstem
Part VI Speech, Handwriting and Signal Processing
Dynamic Features for Visual Speechreading: A Systematic Comparison
Blind Separation of Delayed and Convolved Sources
A Constructive RBF Network for Writer Adaptation
A New Approach to Hybrid HMM/ANN Speech Recognition using Mutual Information Neural Networks
Neural Network Modeling of Speech and Music Signals
A Constructive Learning Algorithm for Discriminant Tangent Models
Dual Kalman Filtering Methods for Nonlinear Prediction, Smoothing and Estimation
Ensemble Methods for Phoneme Classification
Effective Training of a Neural Network Character Classifier for Word Recognition
Part VII Visual Processing
Viewpoint Invariant Face Recognition using Independent Component Analysis and Attractor Networks
Learning Temporally Persistent Hierarchical Representations
Edges are the "Independent Components" of Natural Scenes
Compositionality, MDL Priors, and Object Recognition
Learning Appearance Based Models: Mixtures of Second Moment Experts
Spatial Decorrelation in Orientation Tuned Cortical Cells
Spatiotemporal Coupling and Scaling of Natural Images and Human Visual Sensitivities
Selective Integration: A Model for Disparity Estimation
ARTEX: A Self-organizing Architecture for Classifying Image Regions
Contour Organisation with the EM Algorithm
Visual Cortex Circuitry and Orientation Tuning
Representing Face Images for Emotion Classification
Rapid Visual Processing using Spike Asynchrony
Interpreting Images by Propagating Bayesian Beliefs
Salient Contour Extraction by Temporal Binding in a Cortically-based Network
Part VIII Applications
An Orientation Selective Neural Network for Pattern Identification in Particle Detectors
Adaptive Access Control Applied to Ethernet Data
Predicting Lifetimes in Dynamically Allocated Memory
Multi-Task Learning for Stock Selection
The Neurothermostat: Predictive Optimal Control of Residential Heating Systems
Sequential Tracking in Pricing Financial Options using Model Based and Neural Network Approaches
A Comparison between Neural Networks and other Statistical Techniques for Modeling the Relationship between Tobacco and Alcohol and Cancer
Reinforcement Learning for Dynamic Channel Allocation in Cellular Telephone Systems
Spectroscopic Detection of Cervical Pre-Cancer through Radial Basis Function Networks
Interpolating Earth-science Data using RBF Networks and Mixtures of Experts
Multi-effect Decompositions for Financial Data Modeling
Part IX Control, Navigation and Planning
Multidimensional Triangulation and Interpolation for Reinforcement Learning
Efficient Nonlinear Control with Actor-Tutor Architecture
Local Bandit Approximation for Optimal Learning Problems
Reinforcement Learning for Mixed Open-loop and Closed-loop Control
Multi-Grid Methods for Reinforcement Learning in Controlled Diffusion Processes
Learning from Demonstration
Exploiting Model Uncertainty Estimates for Safe Dynamic Control Learning
Analytical Mean Squared Error Curves in Temporal Difference Learning
Learning Decision Theoretic Utilities through Reinforcement Learning
On-line Policy Improvement using Monte-Carlo Search
Analysis of Temporal-Difference Learning with Function Approximation
Approximate Solutions to Optimal Stopping Problems
Index of Authors
Keyword Index
