Mastering Data Modeling / Edition 1

Paperback (Print)
Buy New from BN.com: $39.86
Buy Used from BN.com: $29.41 (Save 41%)
Condition: Used – Good. Item is in good condition, but packaging may show signs of shelf wear, aging, or tearing.
Used and New from Other Sellers: from $4.90 (Save 90%); usually ships in 1-2 business days
Other sellers (Paperback)
  • All (13) from $4.90
  • New (7) from $7.57
  • Used (6) from $4.90

Overview

Data modeling is one of the most critical phases in the database application development process, but also the phase most likely to fail. A master data modeler must come into any organization, understand its data requirements, and skillfully model the data for applications that most effectively serve organizational needs.

Mastering Data Modeling is a complete guide to becoming a successful data modeler. Featuring a requirements-driven approach, this book clearly explains fundamental concepts, introduces a user-oriented data modeling notation, and describes a rigorous, step-by-step process for collecting, modeling, and documenting the kinds of data that users need.

Assuming no prior knowledge, Mastering Data Modeling sets forth several fundamental problems of data modeling, such as reconciling the software developer's demand for rigor with the users' equally valid need to speak their own (sometimes vague) natural language. In addition, it describes the good habits that help you respond to these fundamental problems. With these good habits in mind, the book describes the Logical Data Structure (LDS) notation and the process of controlled evolution by which you can create low-cost, user-approved data models that resist premature obsolescence. Also included is an encyclopedic analysis of all data shapes that you will encounter. Most notably, the book describes The Flow, a loosely scripted process by which you and the users gradually but continuously improve an LDS until it faithfully represents the information needs. Essential implementation and technology issues are also covered.

You will learn about such vital topics as:

  • The fundamental problems of data modeling
  • The good habits that help a data modeler be effective and economical
  • LDS notation, which encourages these good habits
  • How to read an LDS aloud--in declarative English sentences
  • How to write a well-formed (syntactically correct) LDS
  • How to get users to name the parts of an LDS with words from their own business vocabulary
  • How to visualize data for an LDS
  • A catalog of LDS shapes that recur throughout all data models
  • The Flow--the template for your conversations with users
  • How to document an LDS for users, data modelers, and technologists
  • How to map an LDS to a relational schema
  • How LDS differs from other notations and why
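As a rough illustration of the last two topics, consider how a data model might be read back to users and then mapped to a relational schema. The sketch below is our own, not the book's Chapter 25 mapping rules or its LDS notation: it follows the familiar general pattern of turning each entity into a table and each many-many relationship into an intersection table. The creature/skill example merely echoes the conversation in Chapter 6; all names are hypothetical.

```python
# Illustrative sketch only: a generic entity-to-table mapping in the spirit
# of LDS-to-relational conversion. The book's actual mapping technique is
# described in Chapter 25; column types here are simplified to TEXT.

def entity_table(name, attributes, key):
    """Render one entity as a CREATE TABLE statement."""
    cols = [f"{a} TEXT" for a in attributes]
    cols.append(f"PRIMARY KEY ({key})")
    return f"CREATE TABLE {name} (\n  " + ",\n  ".join(cols) + "\n);"

def intersection_table(name, left, left_key, right, right_key):
    """Render a many-many relationship as an intersection table."""
    return (
        f"CREATE TABLE {name} (\n"
        f"  {left_key} TEXT REFERENCES {left}({left_key}),\n"
        f"  {right_key} TEXT REFERENCES {right}({right_key}),\n"
        f"  PRIMARY KEY ({left_key}, {right_key})\n"
        ");"
    )

print(entity_table("creature", ["creature_id", "name"], "creature_id"))
print(entity_table("skill", ["skill_id", "name"], "skill_id"))
print(intersection_table("acquisition", "creature", "creature_id",
                         "skill", "skill_id"))
```

The point of such a mechanical mapping is exactly the book's point: the hard, failure-prone work is getting the model right with users; deriving a schema from it afterward is comparatively routine.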

"Story interludes" appear throughout the book, illustrating real-world successes of the LDS notation and controlled evolution process. Numerous exercises help you master critical skills. In addition, two detailed, annotated sample conversations with users show you the process of controlled evolution in action.


Editorial Reviews

Booknews
Carlis (computer science, U. of Minnesota) and Maguire, a program manager for Microsoft, explain to information systems analysts and database developers how to become successful data modelers. Using their own Logical Data Structure notation for data modeling, they describe in detail the process for collecting, modeling, and documenting data structures and flow. They also analyze all data shapes and provide several recipes for applying them. They provide no bibliographic references. Annotation © Book News, Inc., Portland, OR (booknews.com)

Product Details

  • ISBN-13: 9780201700459
  • Publisher: Addison-Wesley
  • Publication date: 11/9/2000
  • Edition description: New Edition
  • Edition number: 1
  • Pages: 416
  • Sales rank: 1,046,755
  • Product dimensions: 7.24 (w) x 9.05 (h) x 1.10 (d)

Meet the Author

John Carlis is on the faculty in the Department of Computer Science at the University of Minnesota. For the past twenty years he has taught, consulted, and conducted research on database systems, particularly in data modeling and database language extensions. Visit his homepage at www.cs.umn.edu/~carlis.

Joseph Maguire is an independent consultant and the creator of the forthcoming Web site www.logicaldatastructures.com. For the past 18 years he has been an employee or consultant for many companies, including Bachman Information Systems, Digital, Lotus, Microsoft, and US WEST.


Read an Excerpt

This book teaches you the first step of creating software systems: learning about the information needs of a community of strangers. This book is necessary because that step—known as data modeling—is prone to failure.

This book presumes nothing; it starts from first principles and gradually introduces, justifies, and teaches a rigorous process and notation for collecting and expressing the information needs of a business or organization.

This book is for anyone involved in the creation of information-management software. It is particularly useful to the designers of databases and applications driven by database management systems.

In many regards, this book is different from other books about data modeling. First, because it starts from first principles, it encourages you to question what you might already know about data modeling and data-modeling notations. To best serve users, how should the process of data modeling work? To create good, economical software systems, what kind of information should be on a data model? To become an effective data modeler, what skills should you master before talking with users?

Second, this book teaches you the process of data modeling. It doesn't just tell you what you should know; it tells you what to do. You learn fundamental skills, you integrate them into a process, you practice the process, and you become an expert at it. This means that you can become a "content-neutral modeler," moving gracefully among seemingly unrelated projects for seemingly unrelated clients. Because the process of modeling applies equally to all projects, your expertise becomes universally applicable. Being a master data modeler is like being a master statistician who can contribute to a wide array of unrelated endeavors: population studies, political polling, epidemiology, or baseball.

Third, this book does not focus on technology. Instead, it maintains its focus on the process of discovering and articulating the users' information needs, without concern for how those needs can or should be satisfied by any of the myriad technological options available. We do not completely ignore technology; we frequently mention it to remind you that during data modeling, you should ignore it. Users don't care about technology; they care about their information. The notation we use, Logical Data Structures (LDS), encourages you to focus on users' needs. We think a data modeler should conceal technological details from users. But historically, many data modelers are database designers whose everyday working vocabulary is steeped in technology. When technologists talk with users, things can get awkward. In the worst case, users quit the conversation, or they get swept up in the technological details and neglect to paint a complete picture of their technology-independent information needs. Data modeling is not equivalent to database design.

Another undesirable trend: historically, many organizations wrongly think that data modeling can be done only by long-time, richly experienced members of the organization who have reached the status of "unofficial archivist." This is not true. Modeling is a set of skills like computer programming. It can be done by anyone equipped with the skills. In fact, a skilled modeler who is initially unfamiliar with the organization but has access to users will produce a better model than a highly knowledgeable archivist who is unskilled at modeling.

This book has great ambitions for you. To realize them, you cannot read it casually. Remember, we're trying to foster skills in you rather than merely deliver knowledge to you. If you master these skills, you can eventually apply them instinctively.

Study this book the way you would a calculus book or a cookbook. Practice the skills on real-life problems. Work in teams with your classmates or colleagues. Write notes to yourself in the margins.

An ambitious book like this, well, we didn't just make it up. For starters, we are indebted to Michael Senko, a pioneer in database systems on whose work ours is based. Beyond him, many people deserve thanks. Most important are the many users we have worked with over the years, studying data: Gordon Decker; George Bluhm and others at the U. S. Soil Conservation Service; Peter O'Kelly and others at Lotus Development Corporation; John Hanna, Tim Dawson, and other employees and consultants at US WEST, Inc.; Jim Brown, Frank Carr, and others at Pacific Northwest National Laboratory; and Jane Goodall, Anne Pusey, Jen Williams, and the entire staff at the University of Minnesota's Center for Primate Studies. Not far behind are our students and colleagues. Among them are several deserving special thanks: Jim Albers, Dave Balaban, Leone Barnett, Doug Barry, Bruce Berra, Diane Beyer, Kelsey Bruso, Jake Chen, Paul Chapman, Jan Drake, Bob Elde, Apostolos Georgopolous, Carol Hartley, Jim Held, Chris Honda, David Jefferson, Verlyn Johnson, Roger King, Joe Konstan, Darryn Kozak, Scott Krieger, Heidi Kvinge, James A. Larson, Sal March, Brad Miller, Jerry Morton, Jose Pardo, Paul Pazandak, Doug Perrin, John Riedl, Maureen Riedl, George Romano, Sue Romano, Karen Ryan, Alex Safonov, Wallie Schmidt, Stephanie Sevcik, Libby Shoop, Tyler Sperry, Pat Starr, Fritz Van Evert, Paul Wagner, Bill Wasserman, George Wilcox, Frank Williams, Mike Young, and several thousand students who used early versions of our work. Thanks also go to Lilly Bridwell-Bowles of the Center for Interdisciplinary Studies of Writing at the University of Minnesota. Several people formally reviewed late drafts of this book and made helpful suggestions: Declan Brady, Paul Irvine, Matthew C. Keranen, David Livingstone, and David McGoveran. And finally, thanks to the helpful and patient people at Addison-Wesley: Paul Becker, Mariann Kourafas, Mary T. O'Brien, Ross Venables, Stacie Parillo, Jacquelyn Doucette; the copyeditor, Penny Hull; and the indexer, Ted Laux.

How to Use This Book

To study this book rather than merely read it, you need to understand a bit about what kind of information it contains. The information falls into eight categories.

  • Introduction and justification. Chapters 1 and 2 define the data-modeling problem, introduce the LDS technique and notation, and describe good habits that any data modeler should exhibit. Chapters 22 and 24 justify in more technical detail some of the decisions we made when designing the LDS technique and notation.
  • Definitions. Chapter 4 defines the vocabulary you need to read everything that follows. Chapter 13 defines things more formally—articulating exactly what constitutes a syntactically correct LDS. Chapter 23 presents a formal definition of our Logical Data Structures in a format we especially like—as an LDS.
  • Reading an LDS. Chapter 3 describes how to translate an LDS into declarative sentences. The sentences are typically spoken to users to help them understand an in-progress LDS. Chapter 5 describes how to visualize and annotate sample data for an LDS.
  • Writing an LDS. Chapter 13 describes the syntax rules for writing an LDS. Chapter 14 describes the guidelines for naming the parts of an LDS. Chapter 15 describes some seldom-used names that are part of any LDS. Chapter 16 describes how to label parts of an LDS. (Labels and names differ.) Chapter 17 describes how to document an LDS.
  • LDS shapes and recipes. Chapter 7 introduces the concept of shapes and tells how your expertise with them can make you a master data modeler. Chapters 8 through 12 give an encyclopedic, exhaustive analysis of the shapes you will encounter as a data modeler. Chapter 26 describes some recipes—specific applications of the shapes to common problems encountered by software developers and database designers.
  • Process of LDS development. Chapters 6 and 21 give elaborate examples of the process of LDS development. Chapter 18 describes a step-by-step script, called The Flow, that you follow in your conversations with users. Chapters 19 and 20 describe steps you can take to improve an in-progress LDS at any time—steps that do not fit into the script in any particular place because they fit in every place. Considered as a whole, Chapters 18 through 20 describe the process of controlled evolution, the process by which you guide the users through a conversation that gradually improves the in-progress LDS. "Controlled" implies that the conversation is organized and methodical. "Evolution" implies that the conversation yields a continuously, gradually improving data model.
  • Implementation and technology issues. Chapter 22 describes in detail the forces that compel us to exclude constraints from the LDS notation. Many of these forces stem from implementation issues. Chapter 25 describes a technique for creating a relational schema from an LDS.
  • Critical assessment of the LDS technique and notation. Chapter 24 describes the decisions we made in designing the LDS technique and notation and describes how our decisions differ from those made by the designers of other notations. Chapter 22 is devoted to one such especially noteworthy decision. And throughout the book appear sets of "Story Interludes" which relate anecdotes about our successes and failures learning and using the LDS notation and technique. Taken as a whole, these stories constitute a critical assessment of the technique.
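To make the "reading an LDS aloud" idea of Chapter 3 concrete, here is a rough sketch of turning a relationship into the kind of declarative English sentences a modeler might speak to users. This is our own illustration, not the book's official reading rules; the sentence template, function names, and the creature/skill example (borrowed from Chapter 6's topic) are all hypothetical.

```python
# Illustrative sketch: speaking a data-model relationship as declarative
# English sentences, in the spirit of Chapter 3. The phrasing template is
# hypothetical, not the book's LDS reading conventions.

def read_aloud(left, right, left_many, right_many):
    """Describe a two-entity relationship as two declarative sentences."""
    def clause(subject, obj, many):
        # Naive pluralization; real model vocabulary comes from the users.
        count = "many " + obj + "s" if many else "at most one " + obj
        return f"A {subject} can be related to {count}."
    return clause(left, right, right_many) + " " + clause(right, left, left_many)

print(read_aloud("creature", "skill", left_many=True, right_many=True))
# A creature can be related to many skills. A skill can be related to many creatures.
```

Speaking a model back to users in their own vocabulary, sentence by sentence, is what lets them confirm or correct it without ever seeing a line of notation.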

Reading Paths Through This Book

To become a master data modeler, you must appreciate the interplay among four areas of expertise: LDS reading, LDS writing, LDS shapes, and controlled evolution. These four areas are equally important and interrelated. This book presents these four topics in a sensible order, but you cannot master any one of these areas without mastering the other three. Even if you study this book sequentially, when you get to controlled evolution (Chapters 18 through 20), you will find yourself referring to earlier chapters. Controlled evolution integrates virtually everything preceding Chapter 18. As you study that chapter, your incipient mastery of LDS reading, LDS writing, and shapes will be put to the test.

Chapters 3 and 4 are prerequisites to everything that follows. Chapter 13 is a prerequisite to Chapters 14 through 20.

As you work your way toward mastery, you should do the specific exercises at the end of chapters and the whole-skill mastery exercises in the Appendix. You might want to take a peek at Chapter 6 now to get a feel for how a master data modeler works with users.

John Carlis and Joseph Maguire
September 2000


Table of Contents

Foreword.

Preface.

1. Introduction.

2. Good Habits.

3. Reading an LDS with Sentences.

4. Vocabulary of LDS.

5. Visualizing Allowed and Disallowed Instances.

6. A Conversation with Users about Creatures and Skills.

7. Introduction to Mastering Shapes.

8. One-Entity, No-Relationship Shapes.

9. One-Attribute Shapes.

10. Two-Entity Shapes.

11. Shapes with More Than Two Entities.

12. Shapes with Reflexive Relationships.

13. LDS Syntax Rules.

14. Getting the Names Right.

15. Official Name.

16. Labeling Links.

17. Documenting an LDS.

18. Script for Controlled Evolution.

19. Local, Anytime Steps of Controlled Evolution.

20. Global, Anytime Steps of Controlled Evolution.

21. Conversations about Dairy Farming.

22. Constraints.

23. LDS for LDS.

24. Decisions: Designing a Data-Modeling Notation.

25. LDS and the Relational Model.

26. Cookbook: Recipes for Data Modelers.

Appendix: Exercises for Mastery.

Index.


