
Learning Bayesian Networks


Overview

In this first edition, methods are discussed for doing inference in Bayesian networks and influence diagrams. Hundreds of examples and problems allow readers to grasp the material. Topics discussed include Pearl's message-passing algorithm, parameter learning for binary and multinomial variables, Bayesian structure learning, and constraint-based learning. For expert systems developers and decision theorists.



Meet the Author

Richard E. Neapolitan has been a researcher in Bayesian networks and the area of uncertainty in artificial intelligence since the mid-1980s. In 1990, he wrote the seminal text, Probabilistic Reasoning in Expert Systems, which helped to unify the field of Bayesian networks. Dr. Neapolitan has published numerous articles spanning the fields of computer science, mathematics, philosophy of science, and psychology. Dr. Neapolitan is currently professor and chair of Computer Science at Northeastern Illinois University.


Read an Excerpt

Bayesian networks are graphical structures for representing the probabilistic relationships among a large number of variables and for doing probabilistic inference with those variables. During the 1980s, a good deal of related research was done on developing Bayesian networks (belief networks, causal networks, influence diagrams), algorithms for performing inference with them, and applications that used them. However, the work was scattered throughout research articles. My purpose in writing the 1990 text Probabilistic Reasoning in Expert Systems was to unify this research and to establish a textbook and reference for the field which has come to be known as "Bayesian networks." The 1990s saw the emergence of excellent algorithms for learning Bayesian networks from data. However, by 2000 there still seemed to be no accessible source for "learning Bayesian networks." Similar to my purpose a decade ago, the goal of this text is to provide such a source.
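
The factorization that makes a Bayesian network work can be illustrated with a minimal sketch (a hypothetical two-node network with made-up probabilities, not an example from the book): the joint distribution is the product of each node's conditional probability table, and a posterior follows by enumeration.

```python
# Hypothetical two-node network Cloudy -> Rain with assumed toy CPTs.
P_cloudy = {True: 0.5, False: 0.5}             # P(Cloudy)
P_rain_given_cloudy = {True: 0.8, False: 0.2}  # P(Rain=T | Cloudy)

def joint(cloudy, rain):
    """P(Cloudy, Rain) = P(Cloudy) * P(Rain | Cloudy) -- the network factorization."""
    p_rain_true = P_rain_given_cloudy[cloudy]
    return P_cloudy[cloudy] * (p_rain_true if rain else 1 - p_rain_true)

# Inference by enumeration: P(Cloudy=T | Rain=T) via Bayes' theorem.
numerator = joint(True, True)
evidence = joint(True, True) + joint(False, True)
posterior = numerator / evidence
print(round(posterior, 3))  # 0.4 / 0.5 = 0.8
```

Enumeration is exponential in the number of variables; the algorithms the book covers exploit the graph structure to avoid that blowup.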

In order to make this text a complete introduction to Bayesian networks, I discuss methods for doing inference in Bayesian networks and influence diagrams. However, there is no effort to be exhaustive in this discussion. For example, I give the details of only two algorithms for exact inference with discrete variables: Pearl's message-passing algorithm and D'Ambrosio and Li's symbolic probabilistic inference algorithm. It may seem odd that I present Pearl's algorithm, since it is one of the oldest. I have two reasons for doing this: (1) Pearl's algorithm corresponds to a model of human causal reasoning, which is discussed in this text; and (2) Pearl's algorithm extends readily to an algorithm for doing inference with continuous variables, which is also discussed in this text.
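
The core idea behind Pearl's message passing can be sketched on the simplest possible chain (assumed toy numbers, not the book's presentation): evidence at a node sends a likelihood ("lambda") message up the arc, and the node's belief is the normalized product of its prior ("pi") support and that message.

```python
# Sketch of the pi/lambda message idea on a chain X -> Y (toy numbers).
import numpy as np

prior_x = np.array([0.6, 0.4])          # pi(X): prior over X's two states
P_y_given_x = np.array([[0.9, 0.1],     # rows: states of X
                        [0.3, 0.7]])    # cols: states of Y

# Evidence Y = state 1.  The lambda message Y sends to X is the likelihood
# column P(Y=1 | X); X's belief is the normalized product pi(X) * lambda(X).
lam = P_y_given_x[:, 1]                 # lambda message: [0.1, 0.7]
belief = prior_x * lam
belief /= belief.sum()
print(belief)
```

In a full tree-structured network each node combines pi messages from its parent and lambda messages from its children in the same multiplicative way; this two-node case is just the smallest instance of the pattern.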

The content of the text is as follows. Chapters 1 and 2 cover basics. Specifically, Chapter 1 provides an introduction to Bayesian networks; Chapter 2 discusses further relationships between DAGs and probability distributions such as d-separation, the faithfulness condition, and the minimality condition. Chapters 3-5 concern inference. Chapter 3 covers Pearl's message-passing algorithm, D'Ambrosio and Li's symbolic probabilistic inference, and the relationship of Pearl's algorithm to human causal reasoning. Chapter 4 presents an algorithm for doing inference with continuous variables, an approximate inference algorithm, and an algorithm for abductive inference (finding the most probable explanation). Chapter 5 discusses influence diagrams, which are Bayesian networks augmented with decision nodes and a value node, and dynamic Bayesian networks and influence diagrams. Chapters 6-10 address learning. Chapters 6 and 7 are concerned with parameter learning. Since the notation for these learning algorithms is somewhat arduous, I introduce the algorithms by discussing binary variables in Chapter 6. I then generalize to multinomial variables in Chapter 7. Furthermore, in Chapter 7, I discuss learning parameters when the variables are continuous. Chapters 8, 9, and 10 are concerned with structure learning. Chapter 8 presents the Bayesian method for learning structure in the cases of both discrete and continuous variables, while Chapter 9 discusses the constraint-based method for learning structure. Chapter 10 compares the Bayesian and constraint-based methods, and it presents several real-world examples of learning Bayesian networks. The text ends by referencing applications of Bayesian networks in Chapter 11.
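
The binary-variable parameter learning introduced in Chapter 6 is, at its core, Bayesian updating of a Beta prior from observed counts. A minimal sketch with toy data (the book's notation and development differ):

```python
# Beta-Bernoulli parameter learning sketch (toy data, hypothetical counts).
# With a Beta(a, b) prior on P(X=1), observing s successes and f failures
# yields the posterior Beta(a + s, b + f); its mean is the point estimate.
a, b = 1, 1                 # uniform Beta(1, 1) prior
data = [1, 1, 0, 1, 0, 1]   # six observations of a binary variable
s = sum(data)
f = len(data) - s
posterior_mean = (a + s) / (a + s + b + f)
print(posterior_mean)       # (1 + 4) / (2 + 6) = 0.625
```

Generalizing the Beta to a Dirichlet prior gives the multinomial-variable case, which is the direction Chapter 7 takes.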

This is a text on learning Bayesian networks; it is not a text on artificial intelligence, expert systems, or decision analysis. However, since these are fields in which Bayesian networks find application, they emerge frequently throughout the text. Indeed, I have used the manuscript for this text in my course on expert systems at Northeastern Illinois University. In one semester, I have found that I can cover the core of the following chapters: 1, 2, 3, 5, 6, 7, 8, and 9.

I would like to thank those researchers who have provided valuable corrections, comments, and dialog concerning the material in this text. They include Bruce D'Ambrosio, David Maxwell Chickering, Gregory Cooper, Tom Dean, Carl Entemann, John Erickson, Finn Jensen, Clark Glymour, Piotr Gmytrasiewicz, David Heckerman, Xia Jiang, James Kenevan, Henry Kyburg, Kathryn Blackmond Laskey, Don LaBudde, David Madigan, Christopher Meek, Paul-Andre Monney, Scott Morris, Peter Norvig, Judea Pearl, Richard Scheines, Marco Valtorta, Alex Wolpert, and Sandy Zabell. I thank Sue Coyle for helping me draw the cartoon containing the robots. The idea for the cover design was motivated by Eric Horvitz's graphic for the UAI '97 web page. I thank Mark McKernin for creating a stunning cover using that idea as a seed.


Table of Contents

Preface.

I. BASICS.

1. Introduction to Bayesian Networks.

2. More DAG/Probability Relationships.

II. INFERENCE.

3. Inference: Discrete Variables.

4. More Inference Algorithms.

5. Influence Diagrams.

III. LEARNING.

6. Parameter Learning: Binary Variables.

7. More Parameter Learning.

8. Bayesian Structure Learning.

9. Approximate Bayesian Structure Learning.

10. Constraint-Based Learning.

11. More Structure Learning.

IV. APPLICATIONS.

12. Applications.

Bibliography.

Index.


