Due to an ever-decreasing supply of raw materials and stringent constraints on conventional energy sources, the demand for lightweight, efficient, and low-cost structures has become crucially important in modern engineering design. Engineers must therefore search for optimal and robust design options to address design problems that are often large in scale and highly nonlinear, making solutions challenging to find. In the past two decades, metaheuristic algorithms have shown promising power, efficiency, and versatility in solving these difficult optimization problems.
This book examines the latest developments in metaheuristics and their applications in water, geotechnical and transport engineering, offering practical case studies that demonstrate real-world applications. Topics cover a range of areas within engineering, including reviews of optimization algorithms, artificial intelligence, cuckoo search, genetic programming, neural networks, multivariate adaptive regression, swarm intelligence, genetic algorithms, ant colony optimization, and evolutionary multiobjective optimization, with diverse engineering applications such as the behavior of materials, geotechnical design, flood control, water distribution, and signal networks. The book can serve as a supplementary text for courses in engineering design and computation, as well as a reference for researchers and engineers working in metaheuristics, optimization in civil engineering, and computational intelligence.
- Provides detailed descriptions of all major metaheuristic algorithms with a focus on practical implementation
- Develops new hybrid and advanced methods suitable for civil engineering problems at all levels
- Appropriate for researchers and advanced students seeking to develop their own work
Product dimensions: 5.98(w) x 9.02(h) x 1.01(d)
About the Author
Xin-She Yang obtained his DPhil in Applied Mathematics from the University of Oxford. He then worked at Cambridge University and National Physical Laboratory (UK) as a Senior Research Scientist. He is currently a Reader at Middlesex University London, Adjunct Professor at Reykjavik University (Iceland) and Guest Professor at Xi’an Polytechnic University (China). He is an elected Bye-Fellow at Downing College, Cambridge University. He is also the IEEE CIS Chair for the Task Force on Business Intelligence and Knowledge Management, and the Editor-in-Chief of International Journal of Mathematical Modelling and Numerical Optimisation (IJMMNO).
Read an Excerpt
Metaheuristics in Water, Geotechnical and Transport Engineering
ELSEVIER. Copyright © 2012 Elsevier Inc.
All rights reserved.
Chapter One: Optimization and Metaheuristic Algorithms in Engineering
Centre for Mathematics and Scientific Computing, National Physical Laboratory, Teddington, UK
Optimization is everywhere, and thus it is an important paradigm with a wide range of applications. In almost all applications in engineering and industry, we are trying to optimize something—whether to minimize cost and energy consumption or to maximize profit, output, performance, and efficiency. In reality, resources, time, and money are always limited; consequently, optimization is all the more important in practice (Yang, 2010b; Yang and Koziel, 2011). The optimal use of available resources of any sort requires a paradigm shift in scientific thinking, because most real-world applications involve far more complicated factors and parameters that affect how the system behaves.
Contemporary engineering design relies heavily on computer simulations, which introduce additional difficulties to optimization. The growing demand for accuracy and the ever-increasing complexity of structures and systems make the simulation process more and more time consuming. In many engineering fields, the evaluation of a single design can take as long as several days or even weeks. Any method that can speed up the simulation and optimization process can thus save time and money.
For any optimization problem, the integrated components of the optimization process are the optimization algorithm, an efficient numerical simulator, and a realistic representation of the physical processes that we wish to model and optimize. Building such a representation is often time consuming, and in many cases the computational costs are very high. Once we have a good model, the overall computational cost is determined by the optimization algorithm used for searching and the numerical solver used for simulation.
Search algorithms are the tools and techniques used to achieve optimality for the problem of interest. This search for optimality is complicated further by the fact that uncertainty is almost always present in the real world. Therefore, we seek not only the optimal design but also a robust design in engineering and industry. Optimal solutions that are not robust enough are not practical in reality; suboptimal solutions or good robust solutions are often the choice in such cases.
Simulations are often the most time-consuming part. In many applications, an optimization process involves evaluating the objective function many times, often for thousands, hundreds of thousands, or even millions of configurations. Such evaluations often rely on extensive computational tools such as a computational fluid dynamics simulator or a finite element solver. Therefore, pairing an efficient optimization algorithm with an efficient solver is extremely important.
Optimization problems can be formulated in many ways. For example, the commonly used method of least squares is a special case of maximum-likelihood formulations. By far, the best-known formulation is to write a nonlinear optimization problem as
minimize fi(x), i = 1, 2, ..., M (1.1)
subject to the constraints
hj(x) = 0, j = 1, 2, ..., J (1.2)
gk(x) ≤ 0, k = 1, 2, ..., K (1.3)
where fi, hj, and gk are general nonlinear functions. Here, the design vector x = (x1, x2, ..., xn) can be continuous, discrete, or mixed in n-dimensional space. The functions fi are called objective or cost functions, and when M > 1, the optimization is multiobjective or multicriteria (Sawaragi et al., 1985; Yang, 2010b). It is possible to combine different objectives into a single objective, though multiobjective optimization can give far more information and insight into the problem. It is worth pointing out here that we write the problem as a minimization problem, but it can also be written as a maximization by simply replacing fi(x) by -fi(x).
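The formulation in Eqs. (1.1)–(1.3) translates directly into code. The sketch below is a hypothetical instance (the quadratic objective and the linear constraints are illustrative choices, not taken from the text) showing how a candidate design vector can be checked against equality and inequality constraints:

```python
# Illustrative instance of the formulation in Eqs. (1.1)-(1.3):
# minimize f(x) subject to h(x) = 0 and g(x) <= 0.
# The specific functions here are hypothetical examples.

def f(x):
    """Objective: a simple quadratic with unconstrained minimum at (1, 2)."""
    return (x[0] - 1.0) ** 2 + (x[1] - 2.0) ** 2

def h(x):
    """Equality constraint h(x) = 0: the design must lie on x0 + x1 = 3."""
    return x[0] + x[1] - 3.0

def g(x):
    """Inequality constraint g(x) <= 0: x0 must not exceed 2."""
    return x[0] - 2.0

def is_feasible(x, tol=1e-9):
    """Check both constraint types for a candidate design vector x."""
    return abs(h(x)) <= tol and g(x) <= tol

# Here the unconstrained minimum (1, 2) happens to satisfy both constraints.
print(is_feasible([1.0, 2.0]))  # True
print(f([1.0, 2.0]))            # 0.0
```

Rewriting a maximization as a minimization is the one-line change the text mentions: replace `f(x)` with `-f(x)`.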
When all the functions are nonlinear, we are dealing with nonlinear constrained problems. In the special case when fi, hj, and gk are all linear, the problem becomes a linear program, and we can use widely used linear programming techniques such as the simplex method. When some design variables can take only discrete values (often integers) while other variables are real and continuous, the problem is of mixed type, which is often difficult to solve, especially at large scale.
A very special class of optimization problems is convex optimization, which has guaranteed global optimality: any local optimum is also the global optimum. Most importantly, there are efficient polynomial-time algorithms for solving such problems (Conn et al., 2009). These efficient algorithms, such as the interior-point methods (Karmarkar, 1984), are widely used and have been implemented in many software packages.
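The guarantee that local optimality implies global optimality is what makes convex problems easy. As a minimal one-dimensional illustration of that guarantee (this is a simple bracketing method, not an interior-point method, and the example function is an assumption), a ternary search on a convex function converges to the global minimum without any restarts or randomness:

```python
def ternary_search(func, lo, hi, tol=1e-8):
    """Minimize a convex (unimodal) 1-D function on [lo, hi].

    Convexity guarantees the single minimum bracketed here is global,
    so the interval can be shrunk deterministically until it is tiny.
    """
    while hi - lo > tol:
        m1 = lo + (hi - lo) / 3.0
        m2 = hi - (hi - lo) / 3.0
        if func(m1) < func(m2):
            hi = m2   # the minimum lies in [lo, m2]
        else:
            lo = m1   # the minimum lies in [m1, hi]
    return (lo + hi) / 2.0

# Convex example: (x - 3)^2 + 1 has its global minimum at x = 3.
x_star = ternary_search(lambda x: (x - 3.0) ** 2 + 1.0, -10.0, 10.0)
print(round(x_star, 6))  # 3.0
```

For a nonconvex function, the same procedure could converge to a local minimum, which is exactly the failure mode that motivates the global search strategies discussed later in the chapter.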
1.2 Three Issues in Optimization
There are three main issues in simulation-driven optimization and modeling: the efficiency of the algorithm, the efficiency and accuracy of the numerical simulator, and the assignment of the right algorithm to the right problem. Despite their importance, there are no satisfactory rules or guidelines for resolving them. Obviously, we try to use the most efficient algorithms available, but the actual efficiency of an algorithm depends on many factors, such as its inner workings, the information it needs (such as objective functions and their derivatives), and implementation details. The efficiency of a solver is even more complicated, depending on the actual numerical methods used and the complexity of the problem of interest. As for choosing the right algorithm for the right problem, there are many empirical observations but no agreed guidelines. In fact, there is no universally efficient algorithm for all types of problems. Therefore, the choice depends on many factors and is sometimes subject to the personal preferences of researchers and decision makers.
1.2.1 Efficiency of an Algorithm
An efficient optimizer is very important to ensure that optimal solutions are reachable. The essence of an optimizer is a search or optimization algorithm implemented correctly so as to carry out the desired search (though not necessarily efficiently). It can be integrated and linked with other modeling components. There are many optimization algorithms in the literature, and no single algorithm is suitable for all problems, as dictated by the No Free Lunch Theorems (Wolpert and Macready, 1997).
Optimization algorithms can be classified in many ways, depending on the focus or the characteristics that we are trying to compare. Algorithms can be classified as gradient-based (or derivative-based) and gradient-free (or derivative-free). The classic methods of steepest descent and the Gauss–Newton methods are gradient based, as they use the derivative information in the algorithm, while the Nelder–Mead downhill simplex method (Nelder and Mead, 1965) is a derivative-free method because it uses only the values of the objective, not any derivatives.
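To make this distinction concrete, here is a minimal one-dimensional sketch: a fixed-step gradient descent (gradient-based, needing the derivative) next to a simple pattern search that, like the Nelder–Mead method, uses only objective values. The objective and its derivative are illustrative assumptions, not examples from the text:

```python
def grad_descent(grad, x0, lr=0.1, iters=200):
    """Gradient-based: requires the derivative of the objective."""
    x = x0
    for _ in range(iters):
        x = x - lr * grad(x)
    return x

def coordinate_search(func, x0, step=1.0, tol=1e-6):
    """Derivative-free: probes the objective at nearby points and keeps
    any improvement, halving the step when no neighbor is better."""
    x = x0
    while step > tol:
        improved = False
        for cand in (x - step, x + step):
            if func(cand) < func(x):
                x, improved = cand, True
        if not improved:
            step /= 2.0
    return x

f = lambda x: (x - 2.0) ** 2       # objective (assumed for illustration)
df = lambda x: 2.0 * (x - 2.0)     # its derivative, used only by gradient descent

x_gd = grad_descent(df, 10.0)
x_cs = coordinate_search(f, 10.0)
print(round(x_gd, 4), round(x_cs, 4))  # both converge to the minimum at x = 2
```

The trade-off is visible in the signatures: the gradient-based method takes `df`, while the pattern search needs only `f`, which matters when derivatives are unavailable or expensive.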
Algorithms can also be classified as deterministic or stochastic. If an algorithm works in a mechanically deterministic manner without any random element, it is called deterministic. Such an algorithm will reach the same final solution whenever it starts from the same initial point. The hill-climbing and downhill simplex methods are good examples of deterministic algorithms. On the other hand, if there is some randomness in the algorithm, the algorithm will usually reach a different point every time it is run, even when starting from the same initial point. Genetic algorithms and hill climbing with random restart are good examples of stochastic algorithms.
Analyzing stochastic algorithms in more detail, we can single out the type of randomness that a particular algorithm employs. For example, the simplest and yet often very efficient method is to introduce a random starting point for a deterministic algorithm. The well-known hill-climbing method with random restart is a good example. This simple strategy is both efficient in most cases and easy to implement in practice. A more elaborate way to introduce randomness into an algorithm is to use randomness inside different components of the algorithm, and in this case, we often call such an algorithm heuristic or, more often, metaheuristic (Talbi, 2009; Yang, 2008, 2010b). A very good example is the popular genetic algorithm, which uses randomness for crossover and mutation in terms of a crossover probability and a mutation rate. Here, heuristic means to search by trial and error, while metaheuristic is a higher level of heuristics. However, the modern literature tends to refer to all new stochastic algorithms as metaheuristic. In this book, we will use metaheuristic to mean either. It is worth pointing out that metaheuristic algorithms are a hot research topic, and new algorithms appear almost yearly (Yang, 2008, 2010b).
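A minimal genetic algorithm makes the role of the crossover probability and mutation rate explicit. The sketch below is a bare-bones illustration under assumed choices (binary encoding, tournament selection, a smooth test function), not the specific GA variants discussed later in the book:

```python
import random

def tiny_ga(fitness, n_bits=16, pop_size=30, p_cross=0.9, p_mut=0.02,
            generations=100, seed=1):
    """Minimal genetic algorithm maximizing fitness over x in [0, 1].

    Randomness enters exactly where the text describes: crossover is
    applied with probability p_cross, and each bit mutates at rate p_mut.
    """
    rng = random.Random(seed)
    decode = lambda bits: int("".join(map(str, bits)), 2) / (2 ** n_bits - 1)
    pop = [[rng.randint(0, 1) for _ in range(n_bits)] for _ in range(pop_size)]

    for _ in range(generations):
        def select():
            # Tournament selection: the fitter of two random individuals wins.
            a, b = rng.choice(pop), rng.choice(pop)
            return a if fitness(decode(a)) > fitness(decode(b)) else b

        children = []
        while len(children) < pop_size:
            child, mate = select()[:], select()[:]
            if rng.random() < p_cross:          # crossover probability
                cut = rng.randrange(1, n_bits)
                child = child[:cut] + mate[cut:]
            for i in range(n_bits):
                if rng.random() < p_mut:        # mutation rate
                    child[i] = 1 - child[i]
            children.append(child)
        pop = children

    return max((decode(ind) for ind in pop), key=fitness)

# Maximize a smooth function whose peak is at x = 0.5 (assumed test case).
best = tiny_ga(lambda x: 1.0 - (x - 0.5) ** 2)
print(best)  # a value close to 0.5
```

Raising `p_mut` pushes the search toward random exploration; setting it to zero makes the population prone to premature convergence, which is why both parameters must be tuned together.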
From the mobility point of view, algorithms can be classified as local or global. Local search algorithms typically converge toward a local optimum, not necessarily (and often not) the global optimum, and such algorithms are often deterministic and have no ability to escape local optima. Simple hill climbing is an example. On the other hand, we always try to find the global optimum for a given problem, and if this global optimum is robust, it is often the best choice, though it is not always possible to find it. For global optimization, local search algorithms are not suitable, and we have to use a global search algorithm. Modern metaheuristic algorithms are in most cases intended for global optimization, though the process is not always successful or efficient. A simple strategy such as hill climbing with random restart may turn a local search algorithm into a global one. In essence, randomization is an efficient component of global search algorithms. In this chapter, we will provide a brief review of most metaheuristic optimization algorithms.
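The restart strategy is simple enough to sketch directly. The hill climber below is local (it only accepts improving moves, so it stalls in the nearest basin), and restarting it from random points is what makes the combination global. The multimodal test function is an assumed, Rastrigin-style example:

```python
import math
import random

def hill_climb(func, x0, step=0.1, iters=500, rng=random):
    """Local search: move to a random nearby point only if it improves func.
    Without restarts this gets trapped in whichever basin x0 falls in."""
    x = x0
    for _ in range(iters):
        cand = x + rng.uniform(-step, step)
        if func(cand) < func(x):
            x = cand
    return x

def random_restart(func, lo, hi, restarts=100, seed=0):
    """Make the local search global: restart from random points, keep the best."""
    rng = random.Random(seed)
    best = None
    for _ in range(restarts):
        x = hill_climb(func, rng.uniform(lo, hi), rng=rng)
        if best is None or func(x) < func(best):
            best = x
    return best

# Multimodal test function with local minima near every integer
# and the global minimum at x = 0 (a 1-D Rastrigin-style example).
f = lambda x: x * x + 10.0 * (1.0 - math.cos(2.0 * math.pi * x))

best = random_restart(f, -5.0, 5.0)
print(round(best, 3))  # should land near the global minimum at 0.0
```

Each restart is cheap, and only one needs to land in the global basin, which is the essence of why randomization helps global search.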
Excerpted from Metaheuristics in Water, Geotechnical and Transport Engineering. Copyright © 2012 by Elsevier Inc. Excerpted by permission of ELSEVIER. All rights reserved. No part of this excerpt may be reproduced or reprinted without permission in writing from the publisher.
Table of Contents
1. Optimization and Metaheuristic Algorithms in Engineering
2. Application of Soft Computing Methods in Water Resources Engineering (Hazi Mohammad Azamathulla)
3. Genetic Algorithms and Their Applications to Water Resources Systems
4. Application of the Hybrid HS-Solver Algorithm to the Solution of Groundwater Management Problems
5. Evolutionary Multi-objective Optimization of Water Distribution Networks
6. Ant Colony Optimization for Parameter Estimation of Flood Frequency Distributions
7. Optimal Reservoir Operation for Irrigation Planning Using Swarm Intelligence Algorithms
8. Artificial Intelligence in Geotechnical Engineering: Applications, Modelling Aspects and Future Directions
9. Hybrid Heuristic Optimization Methods in Geotechnical Engineering
10. Artificial Neural Networks in Geotechnical Engineering: Modelling and Application Issues
11. Geotechnical Applications of Bayesian Neural Networks
12. Linear and Tree-Based Genetic Programming for Solving Geotechnical Engineering Problems
13. A New Approach to Modelling the Behaviour of Geomaterials
14. Slope Stability Analysis Using Metaheuristics
15. Scheduling Transportation Networks and Reliability Analysis of Geostructures Using Metaheuristics
16. Metaheuristic Applications in Highway and Rail Infrastructure Planning and Design: Implications for Energy and Environmental Sustainability
17. Multi-Objective Optimization of Delay and Stops in Traffic Signal Networks
18. An Improved Hybrid Algorithm for Stochastic Bus-Network Design
19. Hybrid Method and Its Application Toward Smart Pavement Management