More About This Textbook
Overview
In this first edition book, methods are discussed for doing inference in Bayesian networks and influence diagrams. Hundreds of examples and problems allow readers to grasp the information. Some of the topics discussed include Pearl's message-passing algorithm, parameter learning with binary and multinomial variables, Bayesian structure learning, and constraint-based learning. For expert systems developers and decision theorists.
Meet the Author
Richard E. Neapolitan has been a researcher in Bayesian networks and the area of uncertainty in artificial intelligence since the mid-1980s. In 1990, he wrote the seminal text, Probabilistic Reasoning in Expert Systems, which helped to unify the field of Bayesian networks. Dr. Neapolitan has published numerous articles spanning the fields of computer science, mathematics, philosophy of science, and psychology. Dr. Neapolitan is currently professor and chair of Computer Science at Northeastern Illinois University.
Read an Excerpt
Bayesian networks are graphical structures for representing the probabilistic relationships among a large number of variables and for doing probabilistic inference with those variables. During the 1980s, a good deal of related research was done on developing Bayesian networks (belief networks, causal networks, influence diagrams), algorithms for performing inference with them, and applications that used them. However, the work was scattered throughout research articles. My purpose in writing the 1990 text Probabilistic Reasoning in Expert Systems was to unify this research and to establish a textbook and reference for the field which has come to be known as "Bayesian networks." The 1990s saw the emergence of excellent algorithms for learning Bayesian networks from data. However, by 2000 there still seemed to be no accessible source for "learning Bayesian networks." Similar to my purpose a decade ago, the goal of this text is to provide such a source.
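The idea of representing probabilistic relationships graphically and inferring with them can be illustrated with a minimal sketch (not taken from the book): a two-node network Rain → WetGrass, with inference by enumeration via Bayes' rule. All probability values here are made-up for demonstration.

```python
# Illustrative two-node Bayesian network: Rain -> WetGrass.
# Probabilities are invented for this example, not drawn from the text.

# Prior: P(Rain = True / False)
p_rain = {True: 0.2, False: 0.8}

# Conditional: P(WetGrass = True | Rain)
p_wet_given_rain = {True: 0.9, False: 0.1}

def p_rain_given_wet(wet: bool) -> float:
    """P(Rain = True | WetGrass = wet), computed by enumeration."""
    # Joint P(Rain = r, WetGrass = wet) for each value of Rain
    joint = {}
    for r in (True, False):
        p_wet = p_wet_given_rain[r] if wet else 1 - p_wet_given_rain[r]
        joint[r] = p_rain[r] * p_wet
    # Normalize to condition on the evidence WetGrass = wet
    return joint[True] / (joint[True] + joint[False])
```

Observing wet grass raises the probability of rain from the 0.2 prior, since the joint probabilities are renormalized over the evidence. Real networks have many variables, which is why the efficient inference algorithms covered in the text matter.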
In order to make this text a complete introduction to Bayesian networks, I discuss methods for doing inference in Bayesian networks and influence diagrams. However, there is no effort to be exhaustive in this discussion. For example, I give the details of only two algorithms for exact inference with discrete variables. These algorithms are Pearl's message-passing algorithm and D'Ambrosio and Li's symbolic probabilistic inference algorithm. It may seem odd that I present Pearl's algorithm, since it is one of the oldest. I have two reasons for doing this: (1) Pearl's algorithm corresponds to a model of human causal reasoning, which is discussed in this text; and (2) Pearl's algorithm extends readily to an algorithm for doing inference with continuous variables, which is also discussed in this text.
The content of the text is as follows. Chapters 1 and 2 cover basics. Specifically, Chapter 1 provides an introduction to Bayesian networks; Chapter 2 discusses further relationships between DAGs and probability distributions such as d-separation, the faithfulness condition, and the minimality condition. Chapters 3-5 concern inference. Chapter 3 covers Pearl's message-passing algorithm, D'Ambrosio and Li's symbolic probabilistic inference, and the relationship of Pearl's algorithm to human causal reasoning. Chapter 4 presents an algorithm for doing inference with continuous variables, an approximate inference algorithm, and an algorithm for abductive inference (finding the most probable explanation). Chapter 5 discusses influence diagrams, which are Bayesian networks augmented with decision nodes and a value node, and dynamic Bayesian networks and influence diagrams. Chapters 6-10 address learning. Chapters 6 and 7 are concerned with parameter learning. Since the notation for these learning algorithms is somewhat arduous, I introduce the algorithms by discussing binary variables in Chapter 6. I then generalize to multinomial variables in Chapter 7. Furthermore, in Chapter 7, I discuss learning parameters when the variables are continuous. Chapters 8, 9, and 10 are concerned with structure learning. Chapter 8 presents the Bayesian method for learning structure in the cases of both discrete and continuous variables, while Chapter 9 discusses the constraint-based method for learning structure. Chapter 10 compares the Bayesian and constraint-based methods, and it presents several real-world examples of learning Bayesian networks. The text ends by referencing applications of Bayesian networks in Chapter 11.
This is a text on learning Bayesian networks; it is not a text on artificial intelligence, expert systems, or decision analysis. However, since these are fields in which Bayesian networks find application, they emerge frequently throughout the text. Indeed, I have used the manuscript for this text in my course on expert systems at Northeastern Illinois University. In one semester, I have found that I can cover the core of the following chapters: 1, 2, 3, 5, 6, 7, 8, and 9.
I would like to thank those researchers who have provided valuable corrections, comments, and dialog concerning the material in this text. They include Bruce D'Ambrosio, David Maxwell Chickering, Gregory Cooper, Tom Dean, Carl Entemann, John Erickson, Finn Jensen, Clark Glymour, Piotr Gmytrasiewicz, David Heckerman, Xia Jiang, James Kenevan, Henry Kyburg, Kathryn Blackmond Laskey, Don LaBudde, David Madigan, Christopher Meek, Paul-Andre Monney, Scott Morris, Peter Norvig, Judea Pearl, Richard Scheines, Marco Valtorta, Alex Wolpert, and Sandy Zabell. I thank Sue Coyle for helping me draw the cartoon containing the robots. The idea for the cover design was motivated by Eric Horvitz's graphic for the UAI '97 web page. I thank Mark McKernin for creating a stunning cover using that idea as a seed.
Table of Contents
Preface.
I. BASICS.
1. Introduction to Bayesian Networks.
2. More DAG/Probability Relationships.
II. INFERENCE.
3. Inference: Discrete Variables.
4. More Inference Algorithms.
5. Influence Diagrams.
III. LEARNING.
6. Parameter Learning: Binary Variables.
7. More Parameter Learning.
8. Bayesian Structure Learning.
9. Approximate Bayesian Structure Learning.
10. Constraint-Based Learning.
11. More Structure Learning.
IV. APPLICATIONS.
12. Applications.
Bibliography.
Index.