Knowledge-Based Neurocomputing

Paperback

$48.00

Product Details

ISBN-13: 9780262528733
Publisher: MIT Press
Publication date: 02/04/2000
Series: The MIT Press
Pages: 502
Product dimensions: 7.90(w) x 9.90(h) x 1.20(d)
Age Range: 18 Years

Table of Contents

Preface and Acknowledgments xii
Contributors xiii
Knowledge-Based Neurocomputing: Past, Present, and Future 1(26)
    The Past 1(3)
    The Present 4(17)
    A Taxonomy 4(7)
    Overview of the Book 11(1)
    Overview by Chapters 12(9)
    The Future 21(6)
Architectures and Techniques for Knowledge-Based Neurocomputing 27(36)
    The Knowledge-Data Trade-Off 27(2)
    Foundations for Knowledge-Based Neurocomputing 29(8)
    Architectures for Neurosymbolic Integration 29(5)
    Knowledge-Based Neurocomputing: A Redefinition 34(2)
    KBN and the Bias-Variance Trade-Off 36(1)
    Techniques for Building Prior Knowledge into Neural Networks 37(8)
    Knowledge-Intensive or Translational Techniques 37(1)
    Knowledge-Primed or Hint-Based Techniques 38(6)
    Knowledge-Free or Search-Based Techniques 44(1)
    A Metalevel Architecture for Knowledge-Based Neurocomputing 45(7)
    Overview of Scandal 46(1)
    Strategies for Knowledge Utilization 46(1)
    Experiments 47(4)
    Summary of Findings 51(1)
    Open Research Issues 52(11)
Symbolic Knowledge Representation in Recurrent Neural Networks: Insights from Theoretical Models of Computation 63(54)
    Introduction 63(4)
    Why Neural Networks? 63(1)
    Theoretical Aspects of Neural Networks 64(1)
    What Kind of Architecture Is Appropriate? 64(1)
    Recurrent Networks and Models of Computation 65(1)
    Knowledge Representation and Acquisition 66(1)
    Are Neural Networks Black Boxes? 66(1)
    Overcoming the Bias/Variance Dilemma 66(1)
    Representation of Symbolic Knowledge in Neural Networks 67(1)
    Importance of Knowledge Extraction 67(1)
    Significance of Prior Knowledge 68(1)
    Neural Networks for Knowledge Refinement 68(1)
    Computational Models as Symbolic Knowledge 68(5)
    A Hierarchy of Automata and Languages 68(1)
    Finite-State Automata 69(2)
    Subclasses of Finite-State Automata 71(1)
    Push-Down Automata 72(1)
    Turing Machines 73(1)
    Summary 73(1)
    Mapping Automata into Recurrent Neural Networks 73(10)
    Preliminaries 73(1)
    DFA Encoding Algorithm 74(2)
    Stability of the DFA Representation 76(1)
    Simulations 77(3)
    Scaling Issues 80(1)
    DFA States with Large Indegree 81(1)
    Comparison with Other Methods 82(1)
    Extension to Fuzzy Domains 83(7)
    Preliminaries 83(1)
    Crisp Representation of Fuzzy Automata 84(2)
    Fuzzy FFA Representation 86(4)
    Learning Temporal Patterns with Recurrent Neural Networks 90(5)
    Motivation 90(2)
    Learning Algorithms 92(1)
    Input Dynamics 92(1)
    Real-Time On-Line Training Algorithm 92(1)
    Training Procedure 93(1)
    Deterioration of Generalization Performance 94(1)
    Learning Long-Term Dependencies 94(1)
    Extraction of Rules from Recurrent Neural Networks 95(7)
    Cluster Hypothesis 95(1)
    Extraction Algorithm 96(2)
    Example of DFA Extraction 98(1)
    Selection of DFA Models 98(2)
    Controversy and Theoretical Foundations 100(2)
    Recurrent Neural Networks for Knowledge Refinement 102(3)
    Introduction 102(1)
    Variety of Inserted Rules 103(2)
    Summary and Future Research Directions 105(12)
A Tutorial on Neurocomputing of Structures 117(36)
    Introduction 117(2)
    Basic Concepts 119(2)
    Representing Structures in Neural Networks 121(3)
    The RAAM Family 121(3)
    From Graph Representation to Graph Transductions 124(2)
    Neural Graph Transductions 126(1)
    Recursive Neurons 127(3)
    Learning Algorithms 130(5)
    Backpropagation through Structure 130(1)
    Extension of Real-Time Recurrent Learning 131(1)
    Recursive Cascade Correlation 132(2)
    Extension of Neural Trees 134(1)
    Cyclic Graphs 135(2)
    Learning with Cycles 137(4)
    Backpropagation 137(2)
    Real-Time 139(2)
    Computational Power 141(4)
    Tree Grammars and Tree Automata 142(1)
    Computational Results 142(2)
    Function Approximation 144(1)
    Complexity Issues 145(3)
    Representing FRAO as Boolean Functions 145(1)
    Upper Bounds 146(1)
    A Lower Bound on the Node Complexity 147(1)
    Bounds on Learning 147(1)
    Conclusions 148(5)
Structural Learning and Rule Discovery 153(54)
    Introduction 153(1)
    Structural Learning Methods 154(3)
    Addition of a Penalty Term 155(1)
    Deletion of Unnecessary Units 156(1)
    Deletion of Unnecessary Connections 156(1)
    Constructive Learning 157(1)
    Structural Learning with Forgetting 157(5)
    Learning with Forgetting 157(1)
    Learning with Hidden Units Clarification 158(1)
    Learning with Selective Forgetting 158(1)
    Procedure of SLF 158(1)
    Extension to Recurrent Networks 159(1)
    Interpretation of Forgetting 159(1)
    Determination of the Amount of Decay 160(1)
    Model Selection 161(1)
    Discovery of a Boolean Function 162(3)
    Classification of Irises 165(1)
    Discovery of Recurrent Networks 166(2)
    Prediction of Time Series 168(9)
    Recurrent Networks for Time Series 168(2)
    Prediction Using Jordan Networks 170(2)
    Prediction Using Buffer Networks 172(5)
    Discussion 177(1)
    Adaptive Learning 177(1)
    Methods of Rule Extraction/Discovery 178(4)
    Rule Discovery by SLF 182(1)
    Classification of Mushrooms 183(1)
    MONK's Problems 184(2)
    Modular Structured Networks 186(14)
    Module Formation and Learning of Modular Networks 188(2)
    Boolean Functions 190(6)
    Parity Problems 196(1)
    Geometrical Transformation of Figures 197(1)
    Discussion 198(2)
    Toward Hybrid Intelligence 200(1)
    Conclusion 201(6)
VL1 ANN: Transformation of Rules to Artificial Neural Networks 207(10)
    Introduction 207(1)
    Data Representation and Rule Syntax 208(1)
    The VL1 ANN Algorithm for Rule Representation 209(3)
    Example 212(1)
    Related Work 213(2)
    Summary 215(2)
Integration of Heterogeneous Sources of Partial Domain Knowledge 217(34)
    Introduction 217(3)
    Experts Integration: Domain or Range Transformation Dilemma 220(4)
    Domain Transformation 220(2)
    Range Transformation 222(2)
    Incremental Single Expert Expansion 224(5)
    The HDE Algorithm 225(3)
    Embedding of Transformed Prior Knowledge 228(1)
    Direct Integration of Prior Knowledge 229(1)
    Multiple Experts Integration 229(5)
    Cooperative Combination of Heterogeneous Experts 230(1)
    Symbolic Integration Using Decision Trees 230(2)
    Competitive Integration of Heterogeneous Experts 232(2)
    Results 234(12)
    The Two-Spirals Problem 234(6)
    A Financial Advising Problem 240(6)
    Conclusions 246(5)
Approximation of Differential Equations Using Neural Networks 251(40)
    Motivation 251(1)
    Local Approximation by Taylor Series 252(2)
    Generalization of the Taylor Series Method 254(2)
    Choice of Suitable Neural Network Approximators 256(5)
    Modified Logistic Networks 257(1)
    Radial Basis Function Networks 258(3)
    Transformation of Differential Equations into Approximable Form 261(4)
    Single-Step and Multi-Step Integration Procedures 265(8)
    Modified Logistic Networks 268(1)
    Radial Basis Function Networks 269(1)
    Example: Second Order Differential Equation 270(3)
    Training Designed Neural Networks with Observation Data 273(1)
    Application to Forecasting of Chaotic Time Series 274(11)
    Lorenz System 275(3)
    One-Step-Ahead Forecasts 278(3)
    Repeated Forecasts 281(4)
    Conclusions 285(6)
Fynesse: A Hybrid Architecture for Self-Learning Control 291(34)
    Introduction 291(2)
    Essential Requirements 293(2)
    Dynamical Systems 293(1)
    Autonomous Learning 294(1)
    Quality of Control 294(1)
    Integration of a priori Knowledge 295(1)
    Interpretation 295(1)
    Fundamental Design Decisions 295(3)
    Autonomously Learning to Control 296(1)
    Representation of Controller Knowledge 297(1)
    The Fynesse Architecture 298(5)
    Main Concepts 299(1)
    Degrees of Freedom 299(4)
    Stepping into Fynesse 303(9)
    The Learning Critic 303(5)
    The Explicit Controller 308(4)
    Control of a Chemical Plant 312(9)
    Task Description 312(1)
    A Nonlinear Controller 313(1)
    Learning with Fynesse 314(1)
    Self-Learning from Scratch 314(2)
    Use of a priori Knowledge 316(1)
    Interpretation of the Controller 317(4)
    Summary 321(1)
    Conclusions 321(4)
Data Mining Techniques for Designing Neural Network Time Series Predictors 325(44)
    Introduction 325(2)
    Direct Information Extraction Procedures 327(18)
    Information Theory 327(4)
    Dynamical System Analysis 331(4)
    Stochastic Analysis 335(3)
    An Illustrative Example 338(7)
    Indirect Information Extraction Procedures 345(11)
    Knowledge of Properties of the Target Function 345(1)
    Non-Stationarity Detection 346(5)
    An Illustrative Example 351(5)
    Conclusions 356(13)
Extraction of Decision Trees from Artificial Neural Networks 369(15)
    Introduction 369(1)
    Extraction of Rules from Neural Networks 370(2)
    ANN-DT Algorithm for Extraction of Rules from Artificial Neural Networks 372(7)
    Training of the Artificial Neural Networks 373(1)
    Induction of Rules from Sampled Points in the Feature Space 374(1)
    Interpolation of Correlated Data 375(1)
    Selection of Attribute and Threshold for Splitting 375(3)
    Stopping Criteria and Pruning 378(1)
    Splitting Procedures of ID3, C4.5, and CART 379(1)
    Illustrative Examples 379(5)
    Binary Classification of Points in a Circular Domain 380(1)
    Characterization of Gas-Liquid Flow Patterns 381(1)
    Solidification of ZnCl2 382(1)
    Sine and Cosine Curves 382(1)
    Abalone Data 383(1)
    Sap Flow Data in Pine Trees 383(1)
    Results 384(19)
    Binary Classification of Points in a Circle 386(1)
    Real World Classification Tasks: Examples 2 and 3 387(5)
    Sine and Cosine Curves 392(2)
    Abalone and Pine Data 394(1)
    Discussion 394(3)
    Qualitative Analysis 397(1)
    Conclusions 398(5)
Extraction of Linguistic Rules from Data via Neural Networks and Fuzzy Approximation 403(16)
    Introduction 403(1)
    Fuzzy Rule Extraction Algorithm 404(6)
    Gentamycin Dosage Problem 410(2)
    Iris Flower Classification Problem 412(3)
    Concluding Remarks 415(4)
Neural Knowledge Processing in Expert Systems 419(48)
    Knowledge Representation in Expert Systems 419(11)
    Expert Systems 420(2)
    Explicit Knowledge Representation and Rule-Based Systems 422(2)
    Implicit Knowledge Representation and Neural Networks 424(3)
    Comparison of Rule-Based and Neural Expert Systems 427(3)
    Neural Networks in Expert Systems 430(9)
    Hybrid Systems 430(5)
    Neural Expert Systems 435(4)
    EXPSYS---An Example of a Neural Expert System 439(28)
    Interface 440(2)
    Neural Knowledge Base 442(6)
    Inference Engine 448(2)
    Explanation of Conclusions 450(4)
    Example of Application 454(13)
Index 467

What People are Saying About This

Zurada's first volume is arguably the best neural network text ever written. Cloete and Zurada's Knowledge-Based Neurocomputing continues in this tradition of excellence. Clearly and precisely written, this volume belongs in the library of every neurosmith.

—Robert J. Marks II, Department of Electrical Engineering, University of Washington, Seattle, and former Editor-in-Chief, IEEE Transactions on Neural Networks
