Overview
Filling the void left by other algorithms books, Algorithms and Data Structures provides an approach that emphasizes design techniques. The volume includes applications of algorithms, examples, end-of-section and end-of-chapter exercises, hints and solutions to selected exercises, and figures and notes to help the reader master the design and analysis of algorithms. It covers data structures, searching techniques, divide-and-conquer, sorting and selection, greedy algorithms, dynamic programming, text searching, computational algebra, P and NP, and parallel algorithms. It is intended for anyone seeking a better understanding of algorithms.
Meet the Author
Richard Johnsonbaugh is Professor Emeritus of Computer Science at DePaul University. He has degrees in computer science and mathematics from the University of Oregon, Yale University, and the University of Illinois at Chicago. He is the author of numerous articles and books, including Discrete Mathematics, Fifth Edition, and, with coauthor Martin Kalin, Object-Oriented Programming in C++, Second Edition, Applications Programming in C++, and Applications Programming in ANSI C, Third Edition.
Marcus Schaefer is Assistant Professor of Computer Science at DePaul University. He holds degrees in computer science and mathematics from the University of Chicago and the Universität Karlsruhe. He has authored and coauthored several articles on complexity theory, computability, and graph theory.
Read an Excerpt
Intended for an upper-level undergraduate or graduate course in algorithms, this book is based on our combined 25 years of experience in teaching this course. Our major goals in writing this book were as follows.
Faced with a new computational problem, a designer will often be able to solve it by using one of the algorithms in this book, perhaps after modifying or adapting it slightly. However, some problems cannot be solved by any of the algorithms in this book. For this reason, we present a repertoire of design techniques that can be used to solve the problem, and we help the reader develop intuition about which techniques are likely to succeed. The chapters on NP-completeness and how to deal with it also explain how to recognize problems that are hard to solve and which techniques are available in that case.
Working with algorithms should be fun and exciting. The design of algorithms is a creative task requiring the solution of new problems and old problems in disguise. To be successful, we believe that it is important to enjoy the challenge that a new problem poses. To this end, we have included more examples and exercises of a combinatorial and recreational nature than is typical for a book of this type. All too often the challenge of an unsolved problem is experienced as a threat rather than as an opportunity, and we hope that these examples and exercises help to remove the threat.
Examples of real-world applications of algorithms in this book include data compression in Section 7.5, and the Boyer-Moore-Horspool algorithm in Section 9.4, which is used as part of the implementation of agrep. Most sections of the book introduce a motivating example in the first paragraph. The closest-pair problem (Section 5.3) begins with a pattern recognition example, and Section 8.4, which is concerned with the longest-common-subsequence problem, begins with a discussion of the analysis of proteins.
Algorithm design and analysis are best learned by experience. For this reason, we provide large numbers of worked examples and exercises. Worked examples show how to deal with algorithms, and exercises let the reader practice the techniques. There are over 300 worked examples throughout the book. These examples clarify and show how to develop algorithms, demonstrate applications of the theory, elucidate proofs, and help to motivate the material. The book contains over 1450 exercises, from routine to challenging, which were carefully developed through classroom testing. Close attention was paid to clarity and precision. Because some instant feedback is essential for students, we provide answers to about one-third of the end-of-section exercises (marked with "S" in the exercises) in the back of the book. Solutions to the remaining end-of-section exercises are reserved for instructors (see the Instructor Supplement section that follows).
Prerequisites
The principal computer science prerequisite is a data structures course that covers stacks, queues, linked lists, trees, and graphs. A course in discrete mathematics that covers logic, asymptotic notation (e.g., "big oh" notation), and recurrence relations and their solution by iteration is the main mathematics prerequisite. We do not use advanced methods such as generating functions. In one or two places, we use some basic concepts from calculus. The mathematics topics and data structures used in this book are summarized in Chapters 2 and 3. Some or all of these chapters can be used for reference or review or incorporated into an algorithms course as needed.
Content
Following the first three chapters (containing an introduction, mathematics topics, and data structures), the book presents five chapters that emphasize design techniques.
Chapter 4 features searching techniques, including novel applications such as region-finding in digital pictures.
The divide-and-conquer technique is introduced in Chapter 5. Among the problems considered are a tiling problem, finding the closest pair of points in the plane, and Strassen's matrix-product algorithm. Chapter 6 deals with sorting and selection. Divide-and-conquer is used to develop many of the algorithms in this chapter.
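To give a flavor of the divide-and-conquer pattern applied to sorting, here is a minimal illustrative sketch of merge sort in Python (our illustration, not the book's pseudocode): split the input in half, recursively sort each half, and merge the sorted halves.

```python
def merge_sort(a):
    """Divide-and-conquer sort: split, recursively sort halves, then merge."""
    if len(a) <= 1:          # base case: a list of 0 or 1 items is sorted
        return a
    mid = len(a) // 2
    left = merge_sort(a[:mid])    # conquer each half recursively
    right = merge_sort(a[mid:])
    # merge the two sorted halves into one sorted list
    merged, i, j = [], 0, 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            merged.append(left[i]); i += 1
        else:
            merged.append(right[j]); j += 1
    merged.extend(left[i:])
    merged.extend(right[j:])
    return merged
```

The recurrence for this pattern, two subproblems of half size plus linear merging, solves to O(n log n) time.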
Chapter 7 shows how to use the greedy method to develop algorithms. After showing how to use the greedy method in a simple setting (coin changing), we present Kruskal's algorithm, Prim's algorithm, Dijkstra's algorithm, Huffman's algorithm, and a solution of the continuous-knapsack problem.
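The coin-changing setting can be sketched in a few lines of Python (an illustration in the spirit of the chapter, not the book's own code): the greedy method repeatedly takes the largest coin that still fits.

```python
def greedy_change(amount, denominations):
    """Greedy coin changing: always take the largest denomination that fits.

    Optimal for canonical coin systems such as US coins (25, 10, 5, 1),
    but not for every denomination set -- a point the greedy chapter makes.
    """
    coins = []
    for d in sorted(denominations, reverse=True):
        while amount >= d:
            amount -= d
            coins.append(d)
    return coins
```

For example, `greedy_change(63, [25, 10, 5, 1])` pays 63 cents with six coins: two quarters, one dime, and three pennies.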
Chapter 8 covers the technique of dynamic programming. As in Chapter 7, we first show how dynamic programming operates in a simple setting (computing Fibonacci numbers). We next revisit the coin-changing problem (from Chapter 7 on the greedy method) and contrast dynamic programming with the greedy method. We then discuss optimal grouping of matrices, the longest-common-subsequence problem, and the algorithms of Floyd and Warshall.
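The contrast between the two techniques on coin changing can be sketched as follows (again an illustrative Python fragment, not the book's pseudocode): dynamic programming tabulates the minimum number of coins for every sub-amount, so it finds an optimal answer even on denomination sets where the greedy choice fails.

```python
def dp_min_coins(amount, denominations):
    """Dynamic programming: best[t] = fewest coins that sum to t."""
    INF = float("inf")
    best = [0] + [INF] * amount       # best[0] = 0 coins for amount 0
    for t in range(1, amount + 1):
        for d in denominations:
            # try ending an optimal solution for t with one coin of value d
            if d <= t and best[t - d] + 1 < best[t]:
                best[t] = best[t - d] + 1
    return best[amount]
```

With denominations {1, 3, 4} and amount 6, the greedy method takes 4 + 1 + 1 (three coins), while the table finds 3 + 3 (two coins).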
Chapter 9 discusses text-searching techniques, including the Knuth-Morris-Pratt and Boyer-Moore-Horspool algorithms, and algorithms for nonexact searching.
In Chapter 10, we investigate NP-completeness: a theoretical approach to recognizing and understanding the limitations of algorithms. We include many examples from different areas, such as cellular phone networks, games, and biological computing, to illustrate the ubiquity and universality of NP-completeness.
It is widely believed that NP-complete problems cannot be solved efficiently by algorithms. Nevertheless, these problems arise in applications and have to be solved in practice. Chapter 11, Coping with NP-Completeness, presents a collection of techniques, originating in both practice and theory, for dealing with NP-complete problems. Among the approaches discussed are approximation, parameterization, and the use of heuristics.
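As one concrete example of the approximation approach, a classical 2-approximation for the NP-complete vertex cover problem fits in a few lines (our illustrative choice of example; the chapter's own selection of problems may differ): greedily build a maximal matching and take both endpoints of each matched edge.

```python
def vertex_cover_2approx(edges):
    """2-approximation for vertex cover.

    Any valid cover must contain at least one endpoint of each matched
    edge, so taking both endpoints is at most twice the optimum size.
    """
    cover = set()
    for u, v in edges:
        if u not in cover and v not in cover:  # edge not yet covered
            cover.add(u)
            cover.add(v)
    return cover
```

The algorithm runs in linear time and, although it does not solve the NP-complete problem exactly, it comes with a proven worst-case guarantee, which is exactly the trade-off the approximation approach offers.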
Chapter 12 presents fundamental algorithms for parallel architectures, including algorithms for the PRAM and sorting networks, and offers an introduction to computation in distributed environments.
Pedagogy
Each section (except Section 12.1, which is an introductory section) concludes with Section Exercises. The book contains over 1100 Section Exercises. Some of these exercises check for basic understanding of the material (e.g., some ask for a trace of an algorithm), while others check for a deeper understanding of the material (e.g., some investigate alternative algorithms). Exercises that are more challenging than average are indicated with a star, *.
Each chapter ends with a Notes section, which is followed by Chapter Exercises. Notes sections contain suggestions for further reading and pointers to references. Chapter Exercises, some of which have hints, integrate the material of the chapter. The book contains over 350 Chapter Exercises. They are, on the whole, more challenging than the Section Exercises. We have included some very challenging Chapter Exercises marked with two stars. These will probably require instructor guidance, and some are appropriate for a small project.
Lower bounds for problems are integrated into the chapters that discuss those problems rather than being segregated into separate chapters. For example, after presenting several sorting algorithms, we discuss a lower bound for comparison-based sorting (Section 6.3).
We present and discuss many recent results; for example, parameterized complexity (Section 11.4), an active area of research.
Algorithms are written in pseudocode that is close to the syntax of the familiar C, C++, and Java family of languages. Data types, semicolons, obscure features of the languages, and so on are not used, because we have found that specifying algorithms by writing actual code obscures the algorithm description and makes it difficult for someone not familiar with the language to understand the algorithm. The pseudocode used is completely described in the book.
Figures illustrate concepts, show how algorithms work, elucidate proofs, and motivate the material. Several figures illustrate proofs of theorems. The captions of these figures provide additional explanation and insight into the proofs.
Attention has been given to finding the most direct and comprehensible proofs of correctness. As examples, see Theorem 7.2.5, from which the correctness of both Kruskal's and Prim's algorithms is derived, and the proof of the correctness of Dijkstra's algorithm (Theorem 7.4.5).
We present several examples and arguments to show that our time bounds for algorithms are sharp. See, for example, the subsection in Section 7.3, Lower Bound Time Estimate, which shows that the upper bound for the worst-case time of Prim's algorithm using a binary heap is sharp, and the discussion just before Theorem 7.5.4, which shows that the upper bound for the worst-case time of Huffman's algorithm is sharp.
Table of Contents
1. Mathematical Prerequisites.
2. Data Structures.
3. Searching Techniques.
4. DivideandConquer.
5. Sorting and Selection.
6. Greedy Algorithms.
7. Dynamic Programming.
8. Text Searching.
9. Computational Algebra.
10. P and NP.
11. Coping with NPCompleteness.
12. Parallel Algorithms.
References.
Solutions to Selected Exercises.
Index.