Reviews in Computational Chemistry / Edition 2



This volume, like those prior to it, features chapters by experts in various fields of computational chemistry. Volume 27 covers brittle fracture, molecular detailed simulations of lipid bilayers, semiclassical bohmian dynamics, dissipative particle dynamics, trajectory-based rare event simulations, and understanding metal/metal electrical contact conductance from the atomic to continuum scales. Also included is a chapter on career opportunities in computational chemistry and an appendix listing the e-mail addresses of more than 2500 people in that discipline.


"Reviews in Computational Chemistry remains the most valuable reference to methods and techniques in computational chemistry."—JOURNAL OF MOLECULAR GRAPHICS AND MODELLING

"One cannot generally do better than to try to find an appropriate article in the highly successful Reviews in Computational Chemistry. The basic philosophy of the editors seems to be to help the authors produce chapters that are complete, accurate, clear, and accessible to experimentalists (in particular) and other nonspecialists (in general)."—JOURNAL OF THE AMERICAN CHEMICAL SOCIETY



Editorial Reviews

From the Publisher
“Reviews in Computational Chemistry has been a valuable resource for researchers and students who are interested in entering a new field within computational science and engineering, who are looking to broaden their knowledge, or who are simply curious about new theories, trends and computational tools.”  (Struct Chem, 7 September 2011)
SciTech Book News
Describes computational chemistry tools useful for molecular design and other research applications.
Scientific Computing World
...certainly a book that should be considered as an essential component for any complete natural science library.
Aimed at both the novice molecular modeler and the expert computational chemist, five chapters discuss how molecular modeling of peptidomimetics plays a key role in drug discovery (with examples of successful computer-aided drug design); thermodynamic perturbation and thermodynamic integration approaches in molecular dynamic simulations; and molecular modeling of carbohydrates, the best empirical force fields to use in molecular mechanics, and molecular shape as a useful quantitative descriptor. Annotation c. by Book News, Inc., Portland, Or.
In four lengthy chapters, tutorials and reviews cover how to obtain simple chemical insight and concepts from density functional theory calculations; strategies for modeling photochemical reactions and excited states; how to compute enthalpies of formation of molecules; and the history of the growth of computational chemistry in Canada (earlier volumes in the series provided similar histories for the US, Great Britain, and France). The preface pays tribute to the Quantum Chemistry Program Exchange (QCPE) and briefly discusses information sources for chemists. Annotation c. Book News, Inc., Portland, OR.
From The Critics
The field of computational chemistry is defined by Lipkowitz and Boyd (chemistry, Indiana U.-Purdue U. at Indianapolis) as those aspects of chemical research that are rendered practical by computers. In a volume dedicated to late colleagues, the editors compile tutorials in the areas of small molecule docking (as both a design and virtual screening tool) and scoring, protein-protein docking, spin-orbit coupling in molecules, and cellular automata models of aqueous solution systems. A compilation of books published on topics in this field is appended, augmented by a table of relevant journals and book series. The index denotes computer programs, databases, and journals. An ancillary website is available. These reviews have been ranked fourth in impact among serials in the field. Annotation c. Book News, Inc., Portland, OR.

Meet the Author

Kenny B. Lipkowitz is a recently retired Professor of Chemistry from North Dakota State University.

Tom Cundari is Professor of Chemistry at the University of North Texas.


Read an Excerpt

Reviews in Computational Chemistry, Volume 19

John Wiley & Sons

Copyright © 2003

Kenny B. Lipkowitz, Raima Larter, Thomas R. Cundari
All rights reserved.

ISBN: 0-471-23585-7

Chapter One


This chapter is written for the reader who would like to learn how
Monte Carlo methods are used to calculate thermodynamic properties of systems
at the atomic level, or to determine which advanced Monte Carlo methods
might work best in their particular application. There are a number of
excellent books and review articles on Monte Carlo methods, which are generally
focused on condensed phases, biomolecules or electronic structure theory.
The purpose of this chapter is to explain and illustrate some of the
special techniques that we and our colleagues have found to be particularly
well suited for simulations of nanodimensional atomic and molecular clusters.
We want to help scientists and engineers who are doing their first work in this
area to get off on the right foot, and also provide a pedagogical chapter for
those who are doing experimental work. By including examples of simulations
of some simple, yet representative systems, we provide the reader with some
data for direct comparison when writing their own code from scratch.

Although a number of Monte Carlo methods in current use will be
reviewed, this chapter is not meant to be comprehensive in scope. Monte Carlo
is a remarkably flexible class of numerical methods. So many versions of the
basic algorithms have arisen that we believe a comprehensive review would be
of limited pedagogical value. Instead, we intend to provide our readers with
enough information and background to allow them to navigate successfully
through the many different Monte Carlo techniques in the literature. This
should help our readers use existing Monte Carlo codes knowledgably, adapt
existing codes to their own purposes, or even write their own programs. We
also provide a few general recommendations and guidelines for those who are
just getting started with Monte Carlo methods in teaching or in research.

This chapter has been written with the goal of describing methods that
are generally useful. However, many of our discussions focus on applications
to atomic and molecular clusters (nanodimensional aggregates of a finite number
of atoms and/or molecules). We do this for two reasons:

1. A great deal of our own research has focused on such systems,
particularly the phase transitions and other structural transformations induced by
changes in a cluster's temperature and size, keeping an eye on how various
properties approach their bulk limits. The precise determination of thermodynamic
properties (such as the heat capacity) of a cluster type as a function of
temperature and size presents challenges that must be addressed when using
Monte Carlo methods to study virtually any system. For example, analogous
structural transitions can also occur in phenomena as disparate as the denaturation
of proteins. The modeling of these transitions presents similar
computational challenges to those encountered in cluster studies.

2. Although cluster systems can present some unique challenges, their
study is unencumbered by many of the technical issues regarding periodic
boundary conditions that arise when solids, liquids, surface adsorbates, and
solvated biomolecules and polymers are studied. These issues are addressed
well elsewhere, and can be thoroughly appreciated and mastered once
a general background in Monte Carlo methods is obtained from this chapter.

It should be noted that "Monte Carlo" is a term used in many fields of
science, engineering, statistics, and mathematics to mean entirely different
things. The one (and only) thing that all Monte Carlo methods have in common
is that they all use random numbers to help calculate something. What we
mean by "Monte Carlo" in this chapter is the use of random-walk processes
to draw samples from a desired probability function, thereby allowing one to
calculate integrals of the form ∫ dq f(q) ρ(q). The quantity ρ(q) is a normalized
probability density function that spans the space of a many-dimensional
variable q, and f(q) is a function whose average is of thermodynamic importance
and interest. This integral, as well as all other integrals in this chapter,
should be understood to be a definite integral that spans the entire domain of
q. Finally, we note that the inclusion of quantum effects through path-integral
Monte Carlo methods is not discussed in this chapter. The reader interested in
including quantum effects in Monte Carlo thermodynamic calculations is
referred elsewhere.


Monte Carlo simulations are widely used in the fields of chemistry, biology,
physics, and engineering in order to determine the structural and thermodynamic
properties of complex systems at the atomic level. Thermodynamic
averages of molecular properties can be determined from Monte Carlo methods,
as can minimum-energy structures. Let ⟨f⟩ represent the average value
of some coordinate-dependent property f(x), with x representing the 3N
Cartesian coordinates needed to locate all of the N atoms. In the canonical
ensemble (fixed N, V and T, with V the volume and T the absolute temperature),
averages of molecular properties are given by an average of f(x) over the
Boltzmann distribution

⟨f⟩ = ∫ dx f(x) exp[−βU(x)] / ∫ dx exp[−βU(x)]   [1]

where U(x) is the potential energy of the system, β = 1/(k_B T), and k_B is the
Boltzmann constant. If one can compute the thermodynamic average of
f(x) it is then possible to calculate various thermodynamic properties. In
the canonical ensemble it is most common to calculate E, the internal energy,
and C_v, the constant-volume heat capacity (although other properties can be
calculated as well). For example, if we average U(x) over all possible configurations
according to Eq. [1], then E and C_v are given by

E = 3N k_B T / 2 + ⟨U⟩   [2]

C_v = 3N k_B / 2 + (⟨U²⟩ − ⟨U⟩²) / (k_B T²)   [3]

The first term in each equation represents the contribution of kinetic energy,
which is analytically integrable. In the harmonic (low-temperature) limit, E
given by Eq. [2] will be a linear function of temperature and C_v from Eq. [3]
will be constant, in accordance with the Equipartition Theorem. For a small
cluster of, say, 6 atoms, the integrals implicit in the calculation of Eqs. [1]
and [2] are already of such high dimension that they cannot be effectively computed
using Simpson's rule or other basic quadrature methods. For
larger clusters, liquids, polymers or biological molecules the dimensionality
is obviously much higher, and one typically resorts to either Monte Carlo,
molecular dynamics, or other related algorithms.
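As a minimal sketch of how Eqs. [2] and [3] are used in practice, the following Python function (the name, and the choice of reduced units with k_B = 1, are our own assumptions, not from the text) turns a set of potential-energy samples drawn from the Boltzmann distribution into estimates of E and C_v:

```python
import numpy as np

def thermo_from_samples(U_samples, N, T, kB=1.0):
    """Estimate E (Eq. [2]) and Cv (Eq. [3]) from potential-energy
    samples drawn from the Boltzmann distribution at temperature T.
    Illustrative helper; reduced units (kB = 1) by default."""
    U = np.asarray(U_samples, dtype=float)
    U_mean = U.mean()                       # <U>
    U_fluct = (U * U).mean() - U_mean**2    # <U^2> - <U>^2
    E = 1.5 * N * kB * T + U_mean           # kinetic term + <U>, Eq. [2]
    Cv = 1.5 * N * kB + U_fluct / (kB * T * T)  # Eq. [3]
    return E, Cv
```

A quick sanity check: with constant U samples the fluctuation term vanishes and C_v reduces to the equipartition value 3N k_B / 2.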

To calculate the desired thermodynamic averages, it is necessary to have
some method available for computation of the potential energy, either explicitly
(in the form of a function representing the interaction potential as in
molecular mechanics) or implicitly (in the form of direct quantum-mechanical
calculations). Throughout this chapter we shall assume that U is known or can
be computed as needed, although this computation is typically the most computationally
expensive part of the procedure (because U may need to be computed
many, many times). For this reason, all possible measures should be
taken to assure the maximum efficiency of the method used in the computation
of U.
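For concreteness, one common explicit (molecular-mechanics-style) choice of U for cluster simulations is a pairwise Lennard-Jones potential; the sketch below, in reduced units and with a function name of our own invention, is one straightforward way to write it:

```python
import numpy as np

def lj_energy(x, epsilon=1.0, sigma=1.0):
    """Total Lennard-Jones potential energy of a cluster of N atoms.
    x is an (N, 3) array of Cartesian coordinates in reduced units.
    An illustrative stand-in for the interaction potential U(x)."""
    x = np.asarray(x, dtype=float)
    U = 0.0
    for i in range(len(x)):
        for j in range(i + 1, len(x)):
            r = np.linalg.norm(x[i] - x[j])
            sr6 = (sigma / r) ** 6
            U += 4.0 * epsilon * (sr6 * sr6 - sr6)  # 4eps[(s/r)^12 - (s/r)^6]
    return U
```

Because U is evaluated at every step of a simulation, even this simple double loop is typically vectorized, or updated incrementally (recomputing only the moved atom's pair terms), in production code.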

Also, it should be noted that constraining potentials (which keep the
cluster components from straying too far from a cluster's center of mass)
are sometimes used. At finite temperature, clusters have finite vapor pressures,
and particular cluster sizes are typically unstable to evaporation. Introducing
a constraining potential enables one to define clusters of desired sizes.
Because the constraining potential is artificial, the dependence of calculated
thermodynamic properties on the form and the radius of the constraining
potential must be investigated on a case-by-case basis. Rather than diverting
the discussion from our main focus (Monte Carlo methods), we refer the
interested reader elsewhere for more details and references on the use of constraining potentials.

Random-Number Generation: A Few Notes

Because generalized Metropolis Monte Carlo methods are based on
"random" sampling from probability distribution functions, it is necessary
to use a high-quality random-number generator algorithm to obtain reliable
results. A review of such methods is beyond the scope of this chapter,
but a few general considerations merit discussion.

Random-number generators do not actually produce random numbers.
Rather, they use an integer "seed" to initialize a particular "pseudorandom"
sequence of real numbers that, taken as a group, have properties that leave
them nearly indistinguishable from truly random numbers. These are conventionally
floating-point numbers, distributed uniformly on the interval (0,1). If
there is a correlation between seeds, a correlation may be introduced between
the pseudorandom numbers produced by a particular generator. Thus, the
generator should ideally be initialized only once (at the beginning of the random
walk), and not re-initialized during the course of the walk. The seed
should be supplied either by the user or generated arbitrarily by the program
using, say, the number of seconds since midnight (or some other arcane formula).
One should be cautious about using the "built-in" random-number
generator functions that come with a compiler for Monte Carlo integration
work because some of them are known to be of very poor quality. The
reader should always be sure to consult the appropriate literature and obtain
(and test) a high-quality random-number generator before attempting to write
and debug a Monte Carlo program.
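A minimal example of the seed-once discipline, using NumPy's default PCG64 generator (an assumption on our part; any well-tested generator would serve):

```python
import numpy as np

# Create ONE generator at the start of the random walk, from a single
# explicit seed, and pass it around; never re-seed inside the loop.
rng = np.random.default_rng(seed=12345)

def trial_move(q, step_size, rng):
    """Propose a uniform random displacement in each coordinate,
    drawing from the shared generator (illustrative helper)."""
    return q + step_size * (2.0 * rng.random(q.shape) - 1.0)
```

An explicit seed also makes runs exactly reproducible, which is invaluable when debugging a Monte Carlo program.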

The Generalized Metropolis Monte Carlo Algorithm

The Metropolis Monte Carlo (MMC) algorithm is the single most widely
used method for computing thermodynamic averages. It was originally developed
by Metropolis et al. and used by them to simulate the freezing transition
for a two-dimensional hard-sphere fluid. However, Monte Carlo methods can
be used to estimate the values of multidimensional integrals in whatever context
they may arise. Although Metropolis et al. did not present their algorithm
as a general-utility method for numerical integration, it soon became
apparent that it could be generalized and applied to a variety of situations.
The core of the MMC algorithm is the way in which it draws samples from
a desired probability distribution function. The basic strategies used in
MMC can be generalized so as to apply to many kinds of probability functions
and in combination with many kinds of sampling strategies. Some authors
refer to the generalized MMC algorithm simply as "Metropolis sampling,"
while others have referred to it as the M(RT)² method in honor of the five
authors of the original paper (Metropolis, the Rosenbluths, and the Tellers).
We choose to call this the generalized Metropolis Monte Carlo (gMMC) method,
and we will always use the term MMC to refer strictly to the combination
of methods originally presented by Metropolis et al.

In the literature of numerical analysis, gMMC is classified as an importance
sampling technique. Importance sampling methods generate configurations
that are distributed according to a desired probability function rather
than simply picking them at random from a uniform distribution. The probability
function is chosen so as to obtain improved convergence of the properties
of interest. gMMC is a special type of importance sampling method which
asymptotically (i.e., in the limit that the number of configurations becomes
large) generates states of a system according to the desired probability distribution.
This probability function is usually (but not always) the actual
probability distribution function for the physical system of interest. Nearly
all statistical-mechanical applications of Monte Carlo techniques require the
use of importance sampling, whether gMMC or another method is used (alternatively,
"stratified sampling" is sometimes an effective approach).
gMMC is certainly the most widely used importance sampling method.

In the gMMC algorithm successive configurations of the system are
generated to build up a special kind of random walk called a Markov
chain. The random walk visits successive configurations, where each
configuration's location depends on the configuration immediately preceding
it in the chain. The gMMC algorithm establishes how this can be done so as
to asymptotically generate a distribution of configurations corresponding to
the probability density function of interest, which we denote as ρ(q).

We define K(q_i → q_j) to be the conditional probability that a
configuration at q_i will be brought to q_j in the next step of the random walk. This conditional
probability is sometimes called the "transition rate." The probability
of moving from q to q′ (where q and q′ are arbitrarily chosen configurations
somewhere in the available domain) is therefore given by P(q → q′):

P(q → q′) = K(q → q′) ρ(q)   [4]

For the system to evolve toward a unique limiting distribution, we must place
a constraint on P(q → q′). The gMMC algorithm achieves the desired limiting
behavior by requiring that, on the average, a point is just as likely to move
from q to q′ as it is to move in the reverse direction, namely, that
P(q → q′) = P(q′ → q). This likelihood can be achieved only if the walk is
ergodic (an ergodic walk eventually visits all configurations when started
from any given configuration) and if it is aperiodic (a situation in which no
single number of steps will generate a return to the initial configuration).
This latter requirement is known as the "detailed balance" or the "microscopic
reversibility" condition:

K(q → q′) ρ(q) = K(q′ → q) ρ(q′)   [5]

Satisfying the detailed balance condition ensures that the configurations
generated by the gMMC algorithm will asymptotically be distributed according
to [rho](q).

The transition rate may be written as a product of a trial probability Π
and an acceptance probability A:

K(q_i → q_j) = Π(q_i → q_j) A(q_i → q_j)   [6]

where Π can be taken to be any normalized distribution that asymptotically
spans the space of all possible configurations, and A is constructed so that
Eq. [5] is satisfied for a particular choice of Π.
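Putting Eqs. [4]-[6] together for the common case of Boltzmann sampling with a symmetric trial distribution Π (a uniform displacement), the acceptance probability that satisfies detailed balance reduces to A = min(1, exp(−β ΔU)). A sketch under those assumptions, with illustrative names and reduced units:

```python
import numpy as np

def gmmc_walk(U, q0, beta, step_size, n_steps, rng):
    """Generalized Metropolis Monte Carlo sampling rho(q) ~ exp(-beta*U(q)).
    The trial distribution Pi is a symmetric uniform displacement, so the
    acceptance A = min(1, exp(-beta*dU)) satisfies detailed balance (Eq. [5])."""
    q = np.asarray(q0, dtype=float).copy()
    Uq = U(q)
    samples = []
    for _ in range(n_steps):
        q_new = q + step_size * (2.0 * rng.random(q.shape) - 1.0)  # draw from Pi
        U_new = U(q_new)
        dU = U_new - Uq
        # Accept with probability min(1, exp(-beta * dU)); downhill moves
        # (dU <= 0) are always accepted.
        if dU <= 0.0 or rng.random() < np.exp(-beta * dU):
            q, Uq = q_new, U_new
        samples.append(q.copy())  # a rejected move recounts the old q
    return np.array(samples)
```

A useful check against analytic results: for a one-dimensional harmonic potential U(q) = q²/2 at β = 1, the sampled variance should approach k_B T = 1.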


Excerpted from Reviews in Computational Chemistry, Volume 19

Copyright © 2003 by Kenny B. Lipkowitz, Raima Larter, Thomas R. Cundari.
Excerpted by permission.
All rights reserved. No part of this excerpt may be reproduced or reprinted without permission in writing from the publisher.


Table of Contents

1. Brittle Fracture: From Elasticity Theory to Atomistic Simulations (Stefano Giordano, Alessandro Mattoni, and Luciano Colombo).


Essential Continuum Elasticity Theory.

Conceptual Layout.

The Concept of Strain.

The Concept of Stress.

The Formal Structure of Elasticity Theory.

Constitutive Equations.

The Isotropic and Homogeneous Elastic Body.

Governing Equations of Elasticity and Border Conditions.

Elastic Energy.

Microscopic Theory of Elasticity.

Conceptual Layout.

Triangular Lattice with Central Forces Only.

Triangular Lattice with Two-Body and Three-Body Interactions.

Interatomic Potentials for Solid Mechanics.

Atomic-Scale Stress.

Linear Elastic Fracture Mechanics.

Conceptual Layout.

Stress Concentration.

The Griffith Energy Criterion.

Opening Modes and Stress Intensity Factors.

Some Three-Dimensional Configurations.

Elastic Behavior of Multi Fractured Solids.

Atomistic View of Fracture.

Atomistic Investigations on Brittle Fracture.

Conceptual Layout.

Griffith Criterion for Failure.

Failure in Complex Systems.

Stress Shielding at Crack-Tip.


Appendix: Notation.


2. Dissipative Particle Dynamics (Igor V. Pivkin, Bruce Caswell, and George Em Karniadakis).


Fundamentals of DPD.

Mathematical Formulation.

Units in DPD.

Thermostat and Schmidt Number.

Integration Algorithms.

Boundary Conditions.

Extensions of DPD.

DPD with Energy Conservation.

Fluid Particle Model.

DPD for Two-Phase Flows.

Other Extensions.


Polymer Solutions and Melts.

Binary Mixtures.

Amphiphilic Systems.

Red Cells in Microcirculation.



3. Trajectory-Based Rare Event Simulations (Peter G. Bolhuis and Christoph Dellago).


Simulation of Rare Events.

Rare Event Kinetics from Transition State Theory.

The Reaction Coordinate Problem.

Accelerating Dynamics.

Trajectory-Based Methods.

Outline of the Chapter.

Transition State Theory.

Statistical Mechanical Definitions.

Rate Constants.

Rate Constants from Transition State Theory.

Variational TST.

The Harmonic Approximation.

Reactive Flux Methods.

The Bennett–Chandler Procedure.

The Effective Positive Flux.

The Ruiz–Montero–Frenkel–Brey Method.

Transition Path Sampling.

Path Probability.

Order Parameters.

Sampling the Path Ensemble.

Shooting Move.

Sampling Efficiency.

Biasing the Shooting Point.

Aimless Shooting.

Stochastic Dynamics Shooting Move.

Shifting Move.

Flexible Time Shooting.

Which Shooting Algorithm to Choose?

The Initial Pathway.

The Complete Path Sampling Algorithm.

Enhancement of Sampling by Parallel Tempering.

Multiple-State TPS.

Transition Path Sampling Applications.

Computing Rates with Path Sampling.

The Correlation Function Approach.

Transition Interface Sampling.

Partial Path Sampling.

Replica Exchange TIS or Path Swapping.

Forward Flux Sampling.


Discrete Path Sampling.

Minimizing the Action.

Nudged Elastic Band.

Action-Based Sampling.

Transition Path Theory and the String Method.

Identifying the Mechanism from the Path Ensemble.

Reaction Coordinate and Committor.

Transition State Ensemble and Committor Distributions.

Genetic Neural Networks.

Maximum Likelihood Estimation.

Conclusions and outlook.



4. Understanding Metal/Metal Electrical Contact Conductance from the Atomic to Continuum Scales (Douglas L. Irving).


Factors That Influence Contact Resistance.

Surface Roughness.

Local Heating.

Intermixing and Interfacial Contamination.

Dimensions of Contacting Asperities.

Computational Considerations.

Atomistic Methods.

Calculating Conductance of Nanoscale Asperities.

Hybrid Multiscale Methods.

Characterization of Defected Atoms.

Selected Case Studies.

Conduction Through Metallic Nanowires.

Multiscale Methods Applied to Metal/Metal Contacts.

Concluding Remarks.



5. Molecular Detailed Simulations of Lipid Bilayers (Max L. Berkowitz and James T. Kindt).


Membrane Simulation Methodology.

Force Fields.

Choice of the Ensemble.

Verification of the Force Field.

Monte Carlo Simulation of Lipid Bilayers.

Detailed Simulations of Bilayers Containing Lipid Mixtures.



6. Semiclassical Bohmian Dynamics (Sophya Garashchuk, Vitaly Rassolov, and Oleg Prezhdo).


The Formalism and Its Features.

The Trajectory Formulation.

Features of the Bohmian Formulation.

The Classical Limit of the Schrödinger Equation and the Semiclassical Regime of Bohmian Trajectories.

Using Quantum Trajectories in Dynamics of Chemical Systems.

Bohmian Quantum-Classical Dynamics.

Mean-Field Ehrenfest Quantum-Classical Dynamics.

Quantum-Classical Coupling via Bohmian Particles.

Numerical Illustration of the Bohmian Quantum-Classical Dynamics.

Properties of the Bohmian Quantum-Classical Dynamics.

Hybrid Bohmian Quantum-Classical Phase–Space Dynamics.

The Independent Trajectory Methods.

The Derivative Propagation Method.

The Bohmian Trajectory Stability Approach.

Calculation of Energy Eigenvalues by Imaginary Time Propagation.

Bohmian Mechanics with Complex Action.

Dynamics with the Globally Approximated Quantum Potential (AQP).

Global Energy-Conserving Approximation of the Nonclassical Momentum.

Approximation on Subspaces or Spatial Domains.

Nonadiabatic Dynamics.

Toward Reactive Dynamics in Condensed Phase.

Stabilization of Dynamics by Balancing Approximation Errors.

Bound Dynamics with Tunneling.



Appendix A: Conservation of Density within a Volume Element.

Appendix B: Quantum Trajectories in Arbitrary Coordinates.

Appendix C: Optimal Parameters of the Linearized Momentum on Spatial Domains in Many Dimensions.


7. Prospects for Career Opportunities in Computational Chemistry (Donald B. Boyd).

Introduction and Overview.

Methodology and Results.

Proficiencies in Demand.


An Aside: Economics 101.




Appendix: List of Computational Molecular Scientists.

Subject Index.

