How to Think About Algorithms

by Jeff Edmonds

ISBN-10: 0521614104
ISBN-13: 9780521614108
Pub. Date: 05/19/2008
Publisher: Cambridge University Press
Overview

There are many algorithm texts that provide lots of well-polished code and proofs of correctness. Instead, this one presents insights, notations, and analogies to help the novice describe and think about algorithms like an expert. It is a bit like a carpenter studying hammers instead of houses. Jeff Edmonds provides both the big picture and easy step-by-step methods for developing algorithms, while avoiding the common pitfalls. Paradigms such as loop invariants and recursion help to unify a huge range of algorithms into a few meta-algorithms. Part of the goal is to teach students to think abstractly. Without getting bogged down in formal proofs, the book fosters deeper understanding so that how and why each algorithm works is transparent. These insights are presented in a slow and clear manner accessible to second- or third-year computer science students, preparing them to devise innovative solutions to problems on their own.
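To give a flavor of the loop-invariant paradigm the book is organized around, here is a minimal Python sketch (not taken from the book) of Euclid's GCD algorithm, a topic of Chapter 6, annotated with its invariant and its measure of progress:

```python
def gcd(a: int, b: int) -> int:
    """Euclid's algorithm, annotated in the loop-invariant style."""
    # Loop invariant: gcd(a, b) equals the gcd of the original inputs.
    # Measure of progress: b strictly decreases toward 0 each iteration.
    while b != 0:
        # gcd(a, b) == gcd(b, a % b), so the invariant is maintained.
        a, b = b, a % b
    # Exit condition plus invariant: b == 0, and gcd(a, 0) == a,
    # so a is the gcd of the original inputs.
    return a

print(gcd(1071, 462))  # prints 21
```

The comments mirror the book's recipe: state an invariant, show each iteration maintains it while making progress, and combine the invariant with the exit condition to conclude correctness.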

Product Details

ISBN-13: 9780521614108
Publication date: 05/19/2008
Pages: 472
Product dimensions: 6.00(w) x 1.25(h) x 9.00(d)

About the Author

Jeff Edmonds received his Ph.D. in 1992 from the University of Toronto in theoretical computer science. His thesis proved that certain computation problems require a given amount of time and space. He did his postdoctoral work at ICSI in Berkeley on secure multimedia data transmission and in 1995 became an Associate Professor in the Department of Computer Science at York University, Canada. He has taught their algorithms course thirteen times to date. He has worked extensively at IIT Mumbai, India, and the University of California, San Diego. He is well published in the top theoretical computer science journals on topics including complexity theory, scheduling, proof systems, probability theory, combinatorics, and, of course, algorithms.

Table of Contents

Preface xi

Introduction 1

Part 1 Iterative Algorithms and Loop Invariants

1 Iterative Algorithms: Measures of Progress and Loop Invariants 5

1.1 A Paradigm Shift: A Sequence of Actions vs. a Sequence of Assertions 5

1.2 The Steps to Develop an Iterative Algorithm 8

1.3 More about the Steps 12

1.4 Different Types of Iterative Algorithms 21

1.5 Typical Errors 26

1.6 Exercises 27

2 Examples Using More-of-the-Input Loop Invariants 29

2.1 Coloring the Plane 29

2.2 Deterministic Finite Automaton 31

2.3 More of the Input vs. More of the Output 39

3 Abstract Data Types 43

3.1 Specifications and Hints at Implementations 44

3.2 Link List Implementation 51

3.3 Merging with a Queue 56

3.4 Parsing with a Stack 57

4 Narrowing the Search Space: Binary Search 60

4.1 Binary Search Trees 60

4.2 Magic Sevens 62

4.3 VLSI Chip Testing 65

4.4 Exercises 69

5 Iterative Sorting Algorithms 71

5.1 Bucket Sort by Hand 71

5.2 Counting Sort (a Stable Sort) 72

5.3 Radix Sort 75

5.4 Radix Counting Sort 76

6 Euclid's GCD Algorithm 79

7 The Loop Invariant for Lower Bounds 85

Part 2 Recursion

8 Abstractions, Techniques, and Theory 97

8.1 Thinking about Recursion 97

8.2 Looking Forward vs. Backward 99

8.3 With a Little Help from Your Friends 100

8.4 The Towers of Hanoi 102

8.5 Checklist for Recursive Algorithms 104

8.6 The Stack Frame 110

8.7 Proving Correctness with Strong Induction 112

9 Some Simple Examples of Recursive Algorithms 114

9.1 Sorting and Selecting Algorithms 114

9.2 Operations on Integers 122

9.3 Ackermann's Function 127

9.4 Exercises 128

10 Recursion on Trees 130

10.1 Tree Traversals 133

10.2 Simple Examples 135

10.3 Generalizing the Problem Solved 138

10.4 Heap Sort and Priority Queues 141

10.5 Representing Expressions with Trees 149

11 Recursive Images 153

11.1 Drawing a Recursive Image from a Fixed Recursive and a Base Case Image 153

11.2 Randomly Generating a Maze 156

12 Parsing with Context-Free Grammars 159

Part 3 Optimization Problems

13 Definition of Optimization Problems 171

14 Graph Search Algorithms 173

14.1 A Generic Search Algorithm 174

14.2 Breadth-First Search for Shortest Paths 179

14.3 Dijkstra's Shortest-Weighted-Path Algorithm 183

14.4 Depth-First Search 188

14.5 Recursive Depth-First Search 192

14.6 Linear Ordering of a Partial Order 194

14.7 Exercise 196

15 Network Flows and Linear Programming 198

15.1 A Hill-Climbing Algorithm with a Small Local Maximum 200

15.2 The Primal-Dual Hill-Climbing Method 206

15.3 The Steepest-Ascent Hill-Climbing Algorithm 214

15.4 Linear Programming 219

15.5 Exercises 223

16 Greedy Algorithms 225

16.1 Abstractions, Techniques, and Theory 225

16.2 Examples of Greedy Algorithms 236

16.2.1 Example: The Job/Event Scheduling Problem 236

16.2.2 Example: The Interval Cover Problem 240

16.2.3 Example: The Minimum-Spanning-Tree Problem 244

16.3 Exercises 250

17 Recursive Backtracking 251

17.1 Recursive Backtracking Algorithms 251

17.2 The Steps in Developing a Recursive Backtracking 256

17.3 Pruning Branches 260

17.4 Satisfiability 261

17.5 Exercises 265

18 Dynamic Programming Algorithms 267

18.1 Start by Developing a Recursive Backtracking 267

18.2 The Steps in Developing a Dynamic Programming Algorithm 271

18.3 Subtle Points 277

18.3.1 The Question for the Little Bird 278

18.3.2 Subinstances and Subsolutions 281

18.3.3 The Set of Subinstances 284

18.3.4 Decreasing Time and Space 288

18.3.5 Counting the Number of Solutions 291

18.3.6 The New Code 292

19 Examples of Dynamic Programs 295

19.1 The Longest-Common-Subsequence Problem 295

19.2 Dynamic Programs as More-of-the-Input Iterative Loop Invariant Algorithms 300

19.3 A Greedy Dynamic Program: The Weighted Job/Event Scheduling Problem 303

19.4 The Solution Viewed as a Tree: Chains of Matrix Multiplications 306

19.5 Generalizing the Problem Solved: Best AVL Tree 311

19.6 All Pairs Using Matrix Multiplication 314

19.7 Parsing with Context-Free Grammars 315

19.8 Designing Dynamic Programming Algorithms via Reductions 318

20 Reductions and NP-Completeness 324

20.1 Satisfiability Is at Least as Hard as Any Optimization Problem 326

20.2 Steps to Prove NP-Completeness 330

20.3 Example: 3-Coloring Is NP-Complete 338

20.4 An Algorithm for Bipartite Matching Using the Network Flow Algorithm 342

21 Randomized Algorithms 346

21.1 Using Randomness to Hide the Worst Cases 347

21.2 Solutions of Optimization Problems with a Random Structure 350

Part 4 Appendix

22 Existential and Universal Quantifiers 357

23 Time Complexity 366

23.1 The Time (and Space) Complexity of an Algorithm 366

23.2 The Time Complexity of a Computational Problem 371

24 Logarithms and Exponentials 374

25 Asymptotic Growth 377

25.1 Steps to Classify a Function 379

25.2 More about Asymptotic Notation 384

26 Adding-Made-Easy Approximations 388

26.1 The Technique 389

26.2 Some Proofs for the Adding-Made-Easy Technique 393

27 Recurrence Relations 398

27.1 The Technique 398

27.2 Some Proofs 401

28 A Formal Proof of Correctness 408

Part 5 Exercise Solutions 411

Conclusion 437

Index 439
