Connectionistic Problem Solving: Computational Aspects of Biological Learning

by HAMPSON

Paperback (Softcover reprint of the original 1st ed. 1990)

$54.99 

Overview

1.1 The problem and the approach

The model developed here, which is actually more a collection of components than a single monolithic structure, traces a path from relatively low-level neural/connectionistic structures and processes to relatively high-level animal/artificial intelligence behaviors. Incremental extension of this initial path permits increasingly sophisticated representation and processing strategies, and consequently increasingly sophisticated behavior. The initial chapters develop the basic components of the system at the node and network level, with the general goal of efficient category learning and representation. The later chapters are more concerned with the problems of assembling sequences of actions in order to achieve a given goal state.

The model is referred to as connectionistic rather than neural because, while the basic components are neuron-like, there is only limited commitment to physiological realism. Consequently the neuron-like elements are referred to as "nodes" rather than "neurons". The model is directed more at the behavioral level, and at that level numerous concepts from animal learning theory are directly applicable to connectionistic modeling. An attempt to actually implement these behavioral theories in a computer simulation can be quite informative, as most are only partially specified, and the gaps may be apparent only when actually building a functioning system. In addition, a computer implementation provides an improved capability to explore the strengths and limitations of the different approaches as well as their various interactions.
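
The node-level machinery described above is developed in chapters 2 and 3 around linear threshold units (LTUs) and perceptron training (see the Table of Contents below). As a rough illustration of that style of node-level category learning, and not the author's own implementation, the sketch below shows a minimal binary-feature "node" trained with the classic perceptron error-correction rule; the class name, learning rate, and the toy AND category are illustrative assumptions.

```python
class Node:
    """A neuron-like node: weighted sum of binary features compared to a threshold."""

    def __init__(self, n_inputs, lr=1.0):
        self.weights = [0.0] * n_inputs
        self.bias = 0.0          # plays the role of an adjustable threshold
        self.lr = lr             # learning rate (illustrative choice)

    def predict(self, x):
        # Fire (output 1) if the weighted input sum exceeds the threshold.
        s = sum(w * xi for w, xi in zip(self.weights, x)) + self.bias
        return 1 if s > 0 else 0

    def train(self, samples, epochs=20):
        # Perceptron rule: adjust weights only on misclassified examples.
        for _ in range(epochs):
            errors = 0
            for x, target in samples:
                delta = target - self.predict(x)
                if delta != 0:
                    errors += 1
                    for i, xi in enumerate(x):
                        self.weights[i] += self.lr * delta * xi
                    self.bias += self.lr * delta
            if errors == 0:      # category learned (linearly separable case)
                break


if __name__ == "__main__":
    # Toy category: "both features present" (logical AND), a hypothetical example.
    data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
    node = Node(n_inputs=2)
    node.train(data)
    print([node.predict(x) for x, _ in data])   # expected: [0, 0, 0, 1]
```

The sketch converges in a few epochs because the toy category is linearly separable; the book's later chapters are concerned with assembling such nodes into networks that chain actions toward goal states.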

Product Details

ISBN-13: 9780817634506
Publisher: Birkhäuser Boston
Publication date: 01/01/1990
Edition description: Softcover reprint of the original 1st ed. 1990
Pages: 278
Product dimensions: 6.10(w) x 9.25(h) x 0.02(d)

Table of Contents

1 Introduction
   1.1 The problem and the approach
   1.2 Adaptive problem solving
   1.3 Starting with simple models
   1.4 Sequential vs. parallel processing
   1.5 Learning
   1.6 Overview of the model
2 Node Structure and Training
   2.1 Introduction
   2.2 Node structure
   2.3 Node training
   2.4 Input order
   2.5 Alternative LTU models
   2.6 Continuous or multivalued features
   2.7 Representing multivalued features
   2.8 Excitation, inhibition and facilitation
3 Improving on Perceptron Training
   3.1 Introduction
   3.2 Perceptron training time complexity
   3.3 Output-specific feature associability
   3.4 Origin placement
   3.5 Short-term weight modification
4 Learning and Using Specific Instances
   4.1 Introduction
   4.2 Focusing
   4.3 Generalization vs. specific instance learning
   4.4 Use of specific instances
5 Operator and Network Structure
   5.1 Introduction
   5.2 Network interconnections and association types
   5.3 Multilayer networks
   5.4 Recurrent connections
   5.5 Forms of category representation
   5.6 Space requirements
   5.7 Goals as undistinguished features
6 Operator Training
   6.1 Introduction
   6.2 Operator training (disjunctive representation)
   6.3 OT extensions
   6.4 OT results
   6.5 Shared memory focusing
   6.6 Behavioral results
   6.7 Input-specific feature associability
7 Learned Evaluation and Sequential Credit Assignment
   7.1 Introduction
   7.2 Single action
   7.3 Sequential action
   7.4 Biological evaluation
   7.5 The S-R model
   7.6 Variations on the theme
   7.7 Instrumental and classical conditioning
   7.8 Results
8 Stimulus-Stimulus Associations and Parallel Search
   8.1 Introduction
   8.2 The S-S model
   8.3 Backward search
   8.4 Eco-world: an example domain
   8.5 Forward search
9 Stimulus-Stimulus Discussion
   9.1 Introduction
   9.2 Biological relevance
   9.3 A simple experiment
   9.4 Drive and reward
   9.5 Automatization of behavior
   9.6 Parallel vs. sequential search
10 Stimulus-Goal Associations
   10.1 Introduction
   10.2 The S-G model
   10.3 S-G discussion
   10.4 General goal setting
11 Summary and Conclusions
12 Further Reading and Notes
13 Bibliography
14 Symbols and Abbreviations
15 Name Index
16 Subject Index