A Connectionist Machine for Genetic Hillclimbing / Edition 1

by David Ackley

ISBN-10: 089838236X
ISBN-13: 9780898382365
Pub. Date: 08/31/1987
Publisher: Springer US

Hardcover

$109.99

Overview

In the "black box function optimization" problem, a search strategy is required to find an extremal point of a function without knowing the structure of the function or the range of possible function values. Solving such problems efficiently requires two abilities. On the one hand, a strategy must be capable of learning while searching: It must gather global information about the space and concentrate the search in the most promising regions. On the other hand, a strategy must be capable of sustained exploration: If a search of the most promising region does not uncover a satisfactory point, the strategy must redirect its efforts into other regions of the space. This dissertation describes a connectionist learning machine that produces a search strategy called shastic iterated genetic hillclimbing (SIGH). Viewed over a short period of time, SIGH displays a coarse-to-fine searching strategy, like simulated annealing and genetic algorithms. However, in SIGH the convergence process is reversible. The connectionist implementation makes it possible to diverge the search after it has converged, and to recover coarse-grained information about the space that was suppressed during convergence. The successful optimization of a complex function by SIGH usually involves a series of such converge/diverge cycles.

Product Details

ISBN-13: 9780898382365
Publisher: Springer US
Publication date: 08/31/1987
Series: The Springer International Series in Engineering and Computer Science, #28
Edition description: 1987
Pages: 260
Product dimensions: 6.10(w) x 9.25(h) x 0.03(d)

Table of Contents

1. Introduction
  1.1. Satisfying hidden strong constraints
  1.2. Function optimization
    1.2.1. The methodology of heuristic search
    1.2.2. The shape of function spaces
  1.3. High-dimensional binary vector spaces
    1.3.1. Graph partitioning
  1.4. Dissertation overview
  1.5. Summary
2. The model
  2.1. Design goal: Learning while searching
    2.1.1. Knowledge representation
    2.1.2. Point-based search strategies
    2.1.3. Population-based search strategies
    2.1.4. Combination rules
    2.1.5. Election rules
    2.1.6. Summary: Learning while searching
  2.2. Design goal: Sustained exploration
    2.2.1. Searching broadly
    2.2.2. Convergence and divergence
    2.2.3. Mode transitions
    2.2.4. Resource allocation via taxation
    2.2.5. Summary: Sustained exploration
  2.3. Connectionist computation
    2.3.1. Units and links
    2.3.2. A three-state stochastic unit
    2.3.3. Receptive fields
  2.4. Stochastic iterated genetic hillclimbing
    2.4.1. Knowledge representation in SIGH
    2.4.2. The SIGH control algorithm
    2.4.3. Formal definition
  2.5. Summary
3. Empirical demonstrations
  3.1. Methodology
    3.1.1. Notation
    3.1.2. Parameter tuning
    3.1.3. Non-termination
  3.2. Seven algorithms
    3.2.1. Iterated hillclimbing—steepest ascent (IHC-SA)
    3.2.2. Iterated hillclimbing—next ascent (IHC-NA)
    3.2.3. Stochastic hillclimbing (SHC)
    3.2.4. Iterated simulated annealing (ISA)
    3.2.5. Iterated genetic search—Uniform combination (IGS-U)
    3.2.6. Iterated genetic search—Ordered combination (IGS-O)
    3.2.7. Stochastic iterated genetic hillclimbing (SIGH)
  3.3. Six functions
    3.3.1. A linear space—“One Max”
    3.3.2. A local maximum—“Two Max”
    3.3.3. A large local maximum—“Trap”
    3.3.4. Fine-grained local maxima—“Porcupine”
    3.3.5. Flat areas—“Plateaus”
    3.3.6. A combination space—“Mix”
4. Analytic properties
  4.1. Problem definition
  4.2. Energy functions
  4.3. Basic properties of the learning algorithm
    4.3.1. Motivating the approach
    4.3.2. Defining reinforcement signals
    4.3.3. Defining similarity measures
    4.3.4. The equilibrium distribution
  4.4. Convergence
  4.5. Divergence
5. Graph partitioning
  5.1. Methodology
    5.1.1. Problems
    5.1.2. Algorithms
    5.1.3. Data collection
    5.1.4. Parameter tuning
  5.2. Adding a linear component
  5.3. Experiments on random graphs
  5.4. Experiments on multilevel graphs
6. Related work
  6.1. The problem space formulation
  6.2. Search and learning
    6.2.1. Learning while searching
    6.2.2. Symbolic learning
    6.2.3. Hillclimbing
    6.2.4. Stochastic hillclimbing and simulated annealing
    6.2.5. Genetic algorithms
  6.3. Connectionist modelling
    6.3.1. Competitive learning
    6.3.2. Back propagation
    6.3.3. Boltzmann machines
    6.3.4. Stochastic iterated genetic hillclimbing
    6.3.5. Harmony theory
    6.3.6. Reinforcement models
7. Limitations and variations
  7.1. Current limitations
    7.1.1. The problem
    7.1.2. The SIGH model
  7.2. Possible variations
    7.2.1. Exchanging parameters
    7.2.2. Beyond symmetric connections
    7.2.3. Simultaneous optimization
    7.2.4. Widening the bottleneck
    7.2.5. Temporal credit assignment
    7.2.6. Learning a function
8. Discussion and conclusions
  8.1. Stability and change
  8.2. Architectural goals
    8.2.1. High potential parallelism
    8.2.2. Highly incremental
    8.2.3. “Generalized Hebbian” learning
    8.2.4. Unsupervised learning
    8.2.5. “Closed loop” interactions
    8.2.6. Emergent properties
  8.3. Discussion
    8.3.1. The processor/memory distinction
    8.3.2. Physical computation systems
    8.3.3. Between mind and brain
  8.4. Conclusions
    8.4.1. Recapitulation
    8.4.2. Contributions
References