Computational and Structural Approaches to Drug Discovery: Ligand-Protein Interactions

by Robert Stroud

Computational methods impact all aspects of modern drug discovery and most notably these methods move rapidly from academic exercises to becoming drugs in clinical trials... This insightful book represents the experience and understanding of the global experts in the field and spotlights both the structural and medicinal chemistry aspects of drug design. The need to 'encode' the factors that determine absorption, distribution, metabolism, excretion and toxicology is explored, as they remain the critical issues in this area of research. This indispensable resource provides the reader with:
• A rich understanding of modern approaches to docking
• A comparison and critical evaluation of state-of-the-art methods
• Details on harnessing computational methods for both analysis and prediction
• An insight into predicting potencies and protocols for unbiased evaluation of docking and scoring algorithms
• Critical reviews of current fragment-based methods with perceptive applications to kinases

Addressing a wide range of uses of protein structures for drug discovery, the Editors have created an essential reference for professionals in the pharmaceutical industry and, moreover, an indispensable core text for all graduate-level courses covering molecular interactions and drug discovery.

Editorial Reviews

Chemistry World
The book has been carefully constructed as a series of essays, each addressing an important topic, and even better, topics, that there are many misunderstandings and misconceptions about. Each chapter is well referenced and serves as a good entry point into the literature. I would also strongly recommend this book to medicinal chemists leading project teams that use structural information.

Product Details

Publisher: The Royal Society of Chemistry
Publication date:
Series: RSC Biomolecular Sciences Series, #8
Product dimensions: 6.30(w) x 9.30(h) x 1.00(d)

Read an Excerpt

Computational and Structural Approaches to Drug Discovery

Ligand Protein Interactions

By Robert M Stroud, Janet Finer-Moore

The Royal Society of Chemistry

Copyright © 2008 The Royal Society of Chemistry
All rights reserved.
ISBN: 978-0-85404-365-1


Facing the Wall in Computationally Based Approaches to Drug Discovery


1.1 The Promise, and the Problem

It has been 36 years since the first protein structure was determined and the promise that structure could guide drug discovery was born. Yet even today the ability to design a molecule that will target a nominated site and bind there remains a tantalizing and intellectually enticing prospect. Most good lead compounds fail for reasons to do with lack of efficacy, toxicity, or interference with metabolic pathways. These properties, too, are ripe for computational evaluation before synthesis rather than after, and they are beginning to be addressed by computational approaches. But the real challenge for drug design is in the intellectual process of appreciating what is actually coded within macromolecular interactions of the target with small ligands, and the nature of specificity for metabolic enzymes that degrade the compound. This code is at the heart of biology, as it is of chemistry.

Let us first recognize the impact that just one, rather incremental, computational advance could have on the process. If it were possible, we might take the crystal structure of a good lead compound and predict where introducing a specific substituent would increase its affinity, with even a 10% chance of moving affinity in the favorable direction. This would allow successive improvements in affinity of the lead compound with just 10 alternative chemical synthetic modifications at each round, one of which, on average, would be an improvement (presuming the 10% chance of success). But there is a wall beyond which the probability of success becomes very low. If we could recognize the cusp at which that happens ahead of hitting 'the wall', alternative scaffolds could be introduced. This would save enormous resources in chemistry. The same is true for many steps, such as predicting metabolic fate or toxicity of compounds. A small improvement encoded in computational screening can reduce downstream efforts by factors of hundreds to thousands. Hence the rational encoding of valid principles, and even empirical wisdom, into the process will remain a key to human health, and one of the highest intellectual challenges of the next decade.
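The arithmetic behind this one-improvement-per-round intuition is easy to check. A minimal sketch in Python, using the figures quoted above (a 10% per-modification success rate and 10 candidate modifications per round; both numbers come from the text, and independence between candidates is an added assumption):

```python
# Back-of-the-envelope check of the optimization-round arithmetic:
# if each of 10 synthetic modifications independently has a 10% chance
# of improving affinity, what does one round of synthesis buy us?
p_success = 0.10     # assumed per-modification success rate
n_candidates = 10    # modifications synthesized per round

# Expected number of improved compounds per round.
expected_hits = n_candidates * p_success

# Probability that at least one of the 10 candidates is an improvement.
p_at_least_one = 1 - (1 - p_success) ** n_candidates

print(f"expected improvements per round: {expected_hits:.1f}")   # 1.0
print(f"P(at least one improvement):     {p_at_least_one:.2f}")  # 0.65
```

Note the gap between the two numbers: the expectation is indeed one hit per round, but under the independence assumption roughly a third of rounds would yield no improvement at all, which is why the 'wall', where the per-modification probability collapses, is so costly.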

Another area of drug discovery that is hungry for computational assistance is the personalization of medicine. Why is it that some persons have devastating, even life-threatening, responses to Celebrex or Vioxx, while others find them to be the best treatment? Genetic screening is required, but the translation of this into therapeutic strategy is paramount to the salvage of vast potential in the efforts that have already been made to develop potentially good drugs, that could be highly valuable, though for only a defined cohort of the population. If we could anticipate the patients who would respond badly we would save billion dollar markets for present and future drugs. In many ways we are at an exciting frontier in drug discovery, and a frontier where computational biology will need to take the lead.

Lead discovery in the traditional pharmaceutical industry typically involves screening up to 5 million chemical entities in a few weeks using high-throughput methods, at a cost of $0.50-$1.00 per compound. A single screen of such a library could easily cost $5,000,000. There is consequently real motivation to maximize the return on investment by finding faster and more effective ways to screen. The time and money are compounded by the need to test lead compounds and derivatives for absorption, distribution, metabolism, and excretion (ADME) and toxicity. The spectrum of activity against off-target proteins is rarely built into even computational screens, and is often left simply to trials on cells, animals, and then humans. With such a wealth of knowledge gathered over decades, why are we not further along in prescribing affinity and favorable properties?
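The cost figures above bound themselves directly (a sketch; the library size and per-compound cost range are the numbers quoted in the text):

```python
# Cost bound for one high-throughput screen, using the figures quoted above.
library_size = 5_000_000            # chemical entities screened
cost_per_compound = (0.50, 1.00)    # dollars, low and high estimates

low, high = (library_size * c for c in cost_per_compound)
print(f"screen cost: ${low:,.0f} - ${high:,.0f}")
```

At $2.5-5 million per screening campaign, even a modest computational pre-filter that discards a fraction of the library before synthesis or purchase pays for itself quickly.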

We are in the adolescence of drug discovery. As even empirical rules emerge, it is increasingly pertinent that those rules be incorporated into computational analysis, then understood, and then translated into rational rules and algorithms. Ultimately the computer will extract everything we learn in our laboratories, and translate that knowledge into 'wisdom'. Can we blame the industrial leader who shuns the knowledge-based approach for the tried-and-true screening approach to lead discovery? As the knowledge base explodes, the intellectual ability to understand that knowledge and translate it into improved drug discovery has to catch up and demonstrate its contribution to enhanced drug design. Unfortunately, much of the knowledge of the binding affinities of series of compounds is protected, or otherwise unpublished, in the knowledge bases of pharmaceutical companies. Another great challenge for the industry and for computational approaches is to release and collate this combined knowledge across the scientific community, for it could help chart the course between scaffolds or elaborations that we should avoid. Some of these areas are already in good hands, though they are not yet established, recognized, or used routinely. Like the turn of the tide in an inland sea, the tide often turns long before the reverse flow begins.

The difference between a lead compound and a drug-like compound, or a drug, involves an increase in complexity, and can be encoded in a Venn diagram. This reminds us that optimizing a lead generally develops a compound of modest affinity for its target protein site into one with drug-like properties, and then into one with increased affinity and specificity for the target. The requirements for each of these three types of compounds are inherently different. A tool compound for elaboration has different requirements from the drug-like compound, or the final drug. Likewise the context of use for a drug presents different requirements depending on clinical circumstances or the chosen mode of delivery. The classic paper of Lipinski et al. sought to address the frustration at Pfizer on finding that many hits were not optimizable. Thus new and separate criteria can now be described for leads and for drug-like compounds, and these criteria can be phased in during development.

The challenge to make drug discovery a more rational process is balanced by the need to develop new drugs. The theorist enjoys some freedom to evolve and test ideas and produce algorithms for 'docking', 'fragment linking', scoring theoretical libraries, defining algorithms that predict ADME/toxicity well, etc. The real thermometer of how well we can do relies on heavy commitment from the pharmaceutical industry to take the best ideas and algorithms of the moment and test them in a way that pushes them against the real criteria of making drugs. Even the most conservative chemists will readily take advantage of protein structure as a guide, but perhaps not as a driving plan for the next design, in part because of the limitations in its 'interpretation'. And the insights from a crystal structure are chemical, and not well coordinated with perspectives on toxicity or other physiological properties. The balance of the 'tide' here reflects both the need of the professional medicinal chemist to produce compounds, and the need to better generate understanding and rules that can at least bias toward improved properties. Thus the intellectual quest for rationality and the practical need to produce something quantifiable beautifully constrain each other into one of the grand challenges of our generation. There is a beautiful synergy of ideas and objective evaluation. The solution requires a much closer, open connection between the vast experience of the pharmaceutical industry and academia, since industry has less incentive to interpret failure at any iteration, and academics lack access to the primary knowledge, which remains largely inaccessible.

1.2 Current Limitations in Structure-guided Lead Design

The structure of a lead compound bound to a drug target protein is powerful in directing alterations to the lead compound that produce greater affinity, but only up to a point. There seems to be a bounded space within which current schemes prevail, even when one accounts for protein flexibility in some manner. What constitutes the wall beyond which further alteration does not improve affinity, or 'efficiency' (the free energy divided by the number of atoms)? Surely if we understood the intermolecular code, we should be able to continue to optimize a lead, at least in affinity for its target, by logical addition of functional groups. In many ways getting past this 'wall' in optimization remains one of the most promising, rewarding, and challenging goals in pharmaceutical chemistry, and in chemical biology.

The process of interpreting the structure of a protein target with a bound fragment, and using that knowledge to design an improvement in binding affinity (arguably one of the easier parts of drug discovery), will be understood and encoded in time. There are some obvious weaknesses in our current analyses. Free energies of association are not additive because binding entropy is not additive, and the contribution of entropy to the binding free energy is hard to encode, especially the entropy contributions from water molecules that are displaced from both the drug and the site when the drug binds. This can be seen, for example, in mutational analysis that removes one, then another, interaction. In a case where the crystal structures indicate no significant change in the overall interaction or dynamics, the loss of the first, no matter which one, is small, while the loss of the second produces a large effect. Also, the hydrophobic effect is a major determinant of binding that is not easily incorporated into the exact 'Newtonian' approach of molecular mechanics. Combining 'atomic solvation parameters' with the force fields that attempt to describe the 'Newtonian' forces yields improved results. Yet the necessity for empirical solvation parameters also reminds us that molecular mechanics does not yet adequately deal with the solvation and desolvation that occur when a ligand binds to a macromolecule.

In principle, freeing an ordered water molecule as an interface forms can yield an entropy-driven advantage of about ΔG = −TΔS = −RT ln W, where W is the number of degrees of freedom gained. Thus W ≈ 6 for a freed water molecule (three translational and three rotational degrees of freedom), so |ΔG| ≈ 0.6 ln 6 ≈ 1.1 kcal mol⁻¹, a factor of about 6 in Kd. Displacement of water contributed to the binding affinity of the human immunodeficiency virus (HIV) protease cyclic urea drugs produced by Merck. However, other water molecules become ordered to make the interface. We are not yet able to pay enough attention to the enormous contribution of water. Free energy perturbation methods provide one of the more exact ways of taking what we know of molecular interactions and testing small alterations to ask what the consequence will be.
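The estimate for a freed water molecule can be verified numerically. A minimal sketch, assuming T = 298 K and taking W = 6 as above:

```python
import math

R = 1.987e-3  # gas constant, kcal mol^-1 K^-1
T = 298.0     # kelvin (assumed room temperature)
W = 6         # degrees of freedom gained by a freed water: 3 trans + 3 rot

# Magnitude of the entropy-driven free-energy advantage, |RT ln W|.
dG = R * T * math.log(W)

# The equivalent fold-change in Kd is exp(|dG|/RT), which is just W itself.
kd_factor = math.exp(dG / (R * T))

print(f"|dG| = {dG:.2f} kcal/mol")                    # ~1.06 kcal/mol
print(f"Kd changes by a factor of {kd_factor:.1f}")   # ~6
```

The identity Kd-factor = W makes the point compactly: each fully freed water is worth roughly half an order of magnitude in affinity, before the cost of any waters that become newly ordered at the interface is paid back.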

One limitation to structure-based guidance of drug discovery lies in crystallography itself. Clearly, in order to predict molecular behavior we need more than a single set of coordinates representing the electronic structure of a molecular complex, which is what the usual interpretation of a crystal structure most readily provides. Flexibility is both a key to the ability of the protein to adapt to and bind different chemical moieties, and a basis for the relatively poor compatibility of others. Thus it is an essential element in computational approaches. From its inception protein crystallography has struggled to minimize the parameters used to define structure, in order to maintain a good data-to-parameter ratio, a typical ratio being ~10:1 at a resolution of ~2 Å. The results are usually a median structure, perhaps with occasional alternative conformations in certain positions, and a set of isotropic B factors, which are used to account for thermal vibrations. This parsimonious set of parameters defining the structure is all that can be justified based on the number of observations. With restraints in refinement there are perhaps four to five torsional parameters and eight B factors per amino acid. The B factors really represent the radius of a distribution of trapped states in librational trajectories that are far from harmonic, i.e., anharmonic. The focus of crystallography has been on developing technology that ensures the quality of the atomic-level median structures. Rather different issues are important for drug development. Drug discovery needs an understanding of the dynamic trajectories of the protein chain, the change in the drug lead on forming a complex with the target, and the change in solvent organization, in order to estimate the various steric, strain, and entropic contributions to binding.
Very few of us have interpreted X-ray structures in a way that would allow incorporation of anharmonic motion and of changes in entropy of the side chains and water into structure-guided ligand design. This remains a 21st century challenge that is within reach even now. At the very least protein structures used for these purposes should refine a manifold of conformers in the region of any binding site.

One of the ways to appreciate the nature of the prospects and limitations is to track the history of efforts on one highly validated drug target of high impact for human health. These criteria guarantee that the best attempts from all routes available for drug discovery have been stretched and harnessed cooperatively with each other. They therefore provide tracks to successful drugs, cautionary tales, lessons for how methods can contribute to the process, and a focus on where new improvements are most needed. Thymidylate synthase (TS) is one such example. It has been recognized as a target for antiproliferative anticancer drugs for 50 years; the first structure was determined 20 years ago, and the landscape has been comprehensively mapped ever since. Its mechanism is among the most thoroughly characterized by chemistry, mutational analysis, and crystal structures.

1.3 Lessons in Structure-based Drug Design from Thymidylate Synthase

1.3.1 Mechanism-based Inhibitors and Enzyme-catalyzed Therapeutics

In our own laboratory and elsewhere TS has been a rewarding test bed for computational methods of structure-guided drug discovery. TS is essential for DNA replication, especially in transformed cells, and so has been one of the key targets for anticancer drugs. It became one of the first targets for mechanism-based drug design with Heidelberger's discovery of 5-fluorouracil in 1957. 5-Fluorouracil is converted into 5-fluoro-2'-deoxyuridine-5'-monophosphate (FdUMP), an analog of the TS substrate 2'-deoxyuridine-5'-monophosphate (dUMP), inside cells. Like dUMP, FdUMP forms a covalent ternary complex with the enzyme and cofactor, but is unable to proceed through subsequent catalytic steps. Structure-activity relationships (SARs) for other dUMP analogs indicated that only C5 on the dUMP base could be extensively modified without loss of affinity for TS. The reasons for this became clear when we determined the crystal structure of TS. The preformed dUMP-binding site included side chains or backbone amides within hydrogen-bonding distance of every heteroatom of the pyrimidine ring, and only C5, the position that becomes methylated to produce thymidylate, was surrounded by enough space to accommodate a bulky substituent.

The structure of TS allowed rational design of a new class of mechanism-based drugs, named 'enzyme-catalyzed therapeutic agents' (ECTAs). These compounds were non-toxic C5-substituted dUMP analogs that were metabolized to release toxic compounds in the rapidly dividing cancer cells via the initial steps of TS catalysis. Cancer cells that had developed resistance to TS inhibitors through overproduction of TS were most susceptible to ECTAs.


Excerpted from Computational and Structural Approaches to Drug Discovery by Robert M Stroud, Janet Finer-Moore. Copyright © 2008 The Royal Society of Chemistry. Excerpted by permission of The Royal Society of Chemistry.
All rights reserved. No part of this excerpt may be reproduced or reprinted without permission in writing from the publisher.
Excerpts are provided by Dial-A-Book Inc. solely for the personal use of visitors to this web site.

Meet the Author

Robert M. Stroud is a professor at the University of California and has been a fellow of the Royal Society of Medicine (UK) since 1992 and a member of the National Academy of Sciences (US) since 2003. His prestigious career spans over 35 years, and he has served on the scientific advisory boards of many companies and institutions, including the National Cancer Institute, the Neutron Diffraction Facility, Axys Pharmaceuticals, and Sunesis Pharmaceuticals. Janet Finer-Moore is a Research Biologist at the University of California. Her contributions to the detailed determination of the structural and chemical mechanism of a two-substrate enzyme, and to the detection of amphipathic helices in protein and gene sequences, have generated over 28 publications. She is a member of the AAAS, the ACA, the ACS, and the Biophysical Society.
