Database Modeling and Design: Logical Design
Overview

Database systems and database design technology have undergone significant evolution in recent years. The relational data model and relational database systems dominate business applications; in turn, they are extended by other technologies like data warehousing, OLAP, and data mining. How do you model and design your database application in consideration of new technology or new business needs?

In the extensively revised fourth edition, you’ll get clear explanations, lots of terrific examples and an illustrative case, and the really practical advice you have come to count on--with design rules that are applicable to any SQL-based system. But you’ll also get plenty to help you grow from a new database designer to an experienced designer developing industrial-sized systems.

+ a detailed look at the Unified Modeling Language (UML-2) as well as the entity-relationship (ER) approach for data requirements specification and conceptual modeling, with examples throughout the book in both approaches;
+ details and examples of how to use data modeling concepts in logical database design, and of the transformation of the conceptual model to the relational model and to SQL syntax;
+ the fundamentals of database normalization through the fifth normal form;
+ practical coverage of the major issues in business intelligence: data warehousing, OLAP for decision support systems, and data mining;
+ examples of how to use the most popular CASE tools to handle complex data modeling problems;
+ exercises that test understanding of all the material, plus solutions for many of them.

Product Details

ISBN-13: 9780080470771
Publisher: Morgan Kaufmann Publishers
Publication date: 08/05/2010
Series: The Morgan Kaufmann Series in Data Management Systems
Sold by: Barnes & Noble
Format: eBook
Pages: 296
File size: 3 MB

About the Author

Toby J. Teorey is a professor in the Electrical Engineering and Computer Science Department at the University of Michigan, Ann Arbor. He received his B.S. and M.S. degrees in electrical engineering from the University of Arizona, Tucson, and a Ph.D. in computer sciences from the University of Wisconsin, Madison. He was general chair of the 1981 ACM SIGMOD Conference and program chair for the 1991 Entity-Relationship Conference. Professor Teorey’s current research focuses on database design and data warehousing, OLAP, advanced database systems, and performance of computer networks. He is a member of the ACM and the IEEE Computer Society.
Sam Lightstone is a Senior Technical Staff Member and Development Manager with IBM’s DB2 product development team. His work includes numerous topics in autonomic computing and relational database management systems. He is cofounder and leader of DB2’s autonomic computing R&D effort. He is Chair of the IEEE Data Engineering Workgroup on Self Managing Database Systems and a member of the IEEE Computer Society Task Force on Autonomous and Autonomic Computing. In 2003 he was elected to the Canadian Technical Excellence Council, the Canadian affiliate of the IBM Academy of Technology. He is an IBM Master Inventor with over 25 patents and patents pending; he has published widely on autonomic computing for relational database systems. He has been with IBM since 1991.
Tom Nadeau is the founder of Aladdin Software (aladdinsoftware.com) and works in the area of data and text mining. He received his B.S. degree in computer science and M.S. and Ph.D. degrees in electrical engineering and computer science from the University of Michigan, Ann Arbor. His technical interests include data warehousing, OLAP, data mining and machine learning. He won the best paper award at the 2001 IBM CASCON Conference.
H.V. Jagadish is a professor of electrical engineering and computer science at the University of Michigan, Ann Arbor, where he is part of the database group affiliated with the bioinformatics program and the Center for Computational Medicine and Bioinformatics. Prior to joining the Michigan faculty, he spent over a decade at AT&T Bell Laboratories as a research scientist, where he became head of the Database division.

Read an Excerpt

Database Modeling & Design: Logical Design


By TOBY TEOREY, SAM LIGHTSTONE, and TOM NADEAU

MORGAN KAUFMANN

Copyright © 2006 Elsevier Inc.
All rights reserved.

ISBN: 978-0-08-047077-1


Chapter One

Introduction

Database technology has evolved rapidly in the three decades since the rise and eventual dominance of relational database systems. While many specialized database systems (object-oriented, spatial, multimedia, etc.) have found substantial user communities in the science and engineering fields, relational systems remain the dominant database technology for business enterprises.

Relational database design has evolved from an art to a science that has been made partially implementable as a set of software design aids. Many of these design aids have appeared as the database component of computer-aided software engineering (CASE) tools, and many of them offer interactive modeling capability using a simplified data modeling approach. Logical design—that is, the structure of basic data relationships and their definition in a particular database system—is largely the domain of application designers. These designers can work effectively with tools such as ERwin Data Modeler or Rational Rose with UML, as well as with a purely manual approach. Physical design, the creation of efficient data storage and retrieval mechanisms on the computing platform being used, is typically the domain of the database administrator (DBA). Today's DBAs have a variety of vendor-supplied tools available to help design the most efficient databases. This book is devoted to the logical design methodologies and tools most popular for relational databases today. Physical design methodologies and tools are covered in a separate book.

In this chapter, we review the basic concepts of database management and introduce the role of data modeling and database design in the database life cycle.

1.1 Data and Database Management

The basic component of a file in a file system is a data item, which is the smallest named unit of data that has meaning in the real world—for example, last name, first name, street address, ID number, or political party. A group of related data items treated as a single unit by an application is called a record. Examples of types of records are order, salesperson, customer, product, and department. A file is a collection of records of a single type. Database systems have built upon and expanded these definitions: In a relational database, a data item is called a column or attribute; a record is called a row or tuple; and a file is called a table.
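
To make the correspondence concrete, here is a minimal SQL sketch; the table and column names are hypothetical and not drawn from the book's examples:

    -- A "file" of salesperson "records" becomes a table of rows;
    -- each data item (ID number, last name, and so on) becomes a column.
    CREATE TABLE salesperson (
        salesperson_id  INTEGER PRIMARY KEY,
        last_name       VARCHAR(30),
        first_name      VARCHAR(30),
        department      VARCHAR(30)
    );

    -- One record of the file corresponds to one row (tuple) of the table.
    INSERT INTO salesperson VALUES (101, 'Smith', 'Ann', 'Widget Sales');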

A database is a more complex object; it is a collection of interrelated stored data that serves the needs of multiple users within one or more organizations, that is, interrelated collections of many different types of tables. The motivations for using databases rather than files include greater availability to a diverse set of users, integration of data for easier access to and updating of complex transactions, and less redundancy of data.

A database management system (DBMS) is a generalized software system for manipulating databases. A DBMS supports a logical view (schema, subschema); physical view (access methods, data clustering); data definition language; data manipulation language; and important utilities, such as transaction management and concurrency control, data integrity, crash recovery, and security. Relational database systems, the dominant type of systems for well-formatted business databases, also provide a greater degree of data independence than the earlier hierarchical and network (CODASYL) database management systems. Data independence is the ability to make changes in either the logical or physical structure of the database without requiring reprogramming of application programs. It also makes database conversion and reorganization much easier. Relational DBMSs provide a much higher degree of data independence than previous systems; they are the focus of our discussion on data modeling.
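
As one small illustration of the logical view and of data independence (a sketch reusing the hypothetical salesperson table above, not an example from the book), an SQL view can serve as a subschema: applications that query only the view are insulated from many changes to the underlying base table.

    -- A view as a subschema: programs reading it are unaffected if unrelated
    -- columns are later added to the base salesperson table.
    CREATE VIEW salesperson_directory AS
        SELECT salesperson_id, last_name, first_name
        FROM salesperson;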

1.2 The Database Life Cycle

The database life cycle incorporates the basic steps involved in designing a global schema of the logical database, allocating data across a computer network, and defining local DBMS-specific schemas. Once the design is completed, the life cycle continues with database implementation and maintenance. This chapter contains an overview of the database life cycle, as shown in Figure 1.1. In succeeding chapters, we will focus on the database design process from the modeling of requirements through logical design (steps I and II below). The result of each step of the life cycle is illustrated with a series of diagrams in Figure 1.2. Each diagram shows a possible form of the output of each step, so the reader can see the progression of the design process from an idea to actual database implementation. These forms are discussed in much more detail in Chapters 2 through 6.

I. Requirements analysis. The database requirements are determined by interviewing both the producers and users of data and using the information to produce a formal requirements specification. That specification includes the data required for processing, the natural data relationships, and the software platform for the database implementation. As an example, Figure 1.2 (step I) shows the concepts of products, customers, salespersons, and orders being formulated in the mind of the end user during the interview process.

II. Logical design. The global schema, a conceptual data model diagram that shows all the data and their relationships, is developed using techniques such as ER or UML. The data model constructs must ultimately be transformed into normalized (global) relations, or tables. The global schema development methodology is the same for either a distributed or centralized database.

a. Conceptual data modeling. The data requirements are analyzed and modeled using an ER or UML diagram that includes, for example, semantics for optional relationships, ternary relationships, supertypes, and subtypes (categories). Processing requirements are typically specified using natural language expressions or SQL commands, along with the frequency of occurrence. Figure 1.2 [step II(a)] shows a possible ER model representation of the product/customer database in the mind of the end user.

b. View integration. Usually, when the design is large and more than one person is involved in requirements analysis, multiple views of data and relationships result. To eliminate redundancy and inconsistency from the model, these views must eventually be "rationalized" (resolving inconsistencies due to variance in taxonomy, context, or perception) and then consolidated into a single global view. View integration requires the use of ER semantic tools such as identification of synonyms, aggregation, and generalization. In Figure 1.2 [step II(b)], two possible views of the product/customer database are merged into a single global view based on common data for customer and order. View integration is also important for application integration.

c. Transformation of the conceptual data model to SQL tables. Based on a categorization of data modeling constructs and a set of mapping rules, each relationship and its associated entities are transformed into a set of DBMS-specific candidate relational tables. We will show these transformations in standard SQL in Chapter 5. Redundant tables are eliminated as part of this process. In our example, the tables in step II(c) of Figure 1.2 are the result of transformation of the integrated ER model in step II(b).
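
To give a flavor of such a transformation (an illustrative sketch; the actual tables of Figure 1.2 are not reproduced here), a one-to-many relationship from Customer to Order maps to two candidate tables, with the relationship carried by a foreign key on the "many" side:

    CREATE TABLE customer (
        cust_no    INTEGER PRIMARY KEY,
        cust_name  VARCHAR(50)
    );

    -- The Customer-Order relationship becomes a foreign key rather than a
    -- separate table, because each order is placed by at most one customer.
    CREATE TABLE orders (
        order_no   INTEGER PRIMARY KEY,
        order_date DATE,
        cust_no    INTEGER REFERENCES customer (cust_no)
    );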

d. Normalization of tables. Functional dependencies (FDs) are derived from the conceptual data model diagram and the semantics of data relationships in the requirements analysis. They represent the dependencies among data elements that are unique identifiers (keys) of entities. Additional FDs that represent the dependencies among key and nonkey attributes within entities can be derived from the requirements specification. Candidate relational tables associated with all derived FDs are normalized (i.e., modified by decomposing or splitting tables into smaller tables) using standard techniques. Finally, redundancies in the data in normalized candidate tables are analyzed further for possible elimination, with the constraint that data integrity must be preserved. An example of normalization of the Salesperson table into the new Salesperson and SalesVacations tables is shown in Figure 1.2 from step II(c) to step II(d).
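
A hedged sketch of that decomposition, reusing the hypothetical salesperson table from the earlier sketch (the actual attributes in Figure 1.2 may differ): vacation periods are multivalued per salesperson, so keeping them in the Salesperson table would repeat salesperson facts in every vacation row; normalization moves them into a table of their own.

    -- SalesVacations holds the multivalued vacation data, keyed by the
    -- salesperson plus the start date; salesperson facts now appear only once.
    CREATE TABLE sales_vacations (
        salesperson_id  INTEGER REFERENCES salesperson (salesperson_id),
        vacation_start  DATE,
        vacation_end    DATE,
        PRIMARY KEY (salesperson_id, vacation_start)
    );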

We note here that database tool vendors tend to use the term logical model to refer to the conceptual data model, and they use the term physical model to refer to the DBMS-specific implementation model (e.g., SQL tables). Note also that many conceptual data models are obtained not from scratch, but from the process of reverse engineering from an existing DBMS-specific schema [Silberschatz, Korth, and Sudarshan, 2002].

III. Physical design. The physical design step involves the selection of indexes (access methods), partitioning, and clustering of data. The logical design methodology in step II simplifies the approach to designing large relational databases by reducing the number of data dependencies that need to be analyzed. This is accomplished by inserting conceptual data modeling and integration steps [steps II(a) and II(b) of Figure 1.2] into the traditional relational design approach. The objective of these steps is an accurate representation of reality. Data integrity is preserved through normalization of the candidate tables created when the conceptual data model is transformed into a relational model. The purpose of physical design is to optimize performance as closely as possible.

As part of the physical design, the global schema can sometimes be refined in limited ways to reflect processing (query and transaction) requirements if there are obvious, large gains to be made in efficiency. This is called denormalization. It consists of selecting dominant processes on the basis of high frequency, high volume, or explicit priority; defining simple extensions to tables that will improve query performance; evaluating total cost for query, update, and storage; and considering the side effects, such as possible loss of integrity. This is particularly important for Online Analytical Processing (OLAP) applications.
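
As an illustrative sketch of denormalization (not the book's example; the table and columns are hypothetical), a dominant reporting query that always joins orders to customers might justify a table that carries the customer name redundantly, trading update cost and an integrity risk for read performance:

    -- Denormalized reporting table: cust_name is a redundant copy of customer
    -- data, so the dominant query needs no join.
    CREATE TABLE order_report (
        order_no    INTEGER PRIMARY KEY,
        order_date  DATE,
        cust_no     INTEGER,
        cust_name   VARCHAR(50)  -- duplicated; must be kept consistent on customer updates
    );

    SELECT order_no, order_date, cust_name
    FROM order_report
    WHERE order_date >= DATE '2006-01-01';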

IV. Database implementation, monitoring, and modification. Once the design is completed, the database can be created through implementation of the formal schema using the data definition language (DDL) of a DBMS. Then the data manipulation language (DML) can be used to query and update the database, as well as to set up indexes and establish constraints, such as referential integrity. The language SQL contains both DDL and DML constructs; for example, the create table command represents DDL, and the select command represents DML.
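
For example (a minimal sketch with hypothetical table names), the create table statements below are DDL, defining the schema and a referential integrity constraint, while the insert and select statements are DML, updating and querying the data:

    -- DDL: schema definition; the foreign key establishes referential integrity.
    CREATE TABLE department (
        dept_no    INTEGER PRIMARY KEY,
        dept_name  VARCHAR(40)
    );
    CREATE TABLE employee (
        emp_id     INTEGER PRIMARY KEY,
        emp_name   VARCHAR(40),
        dept_no    INTEGER,
        FOREIGN KEY (dept_no) REFERENCES department (dept_no)
    );

    -- DML: update and query the database.
    INSERT INTO department VALUES (10, 'Marketing');
    INSERT INTO employee VALUES (1, 'Lee', 10);
    SELECT emp_name FROM employee WHERE dept_no = 10;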

As the database begins operation, monitoring indicates whether performance requirements are being met. If they are not being satisfied, modifications should be made to improve performance. Other modifications may be necessary when requirements change or when the end users' expectations increase with good performance. Thus, the life cycle continues with monitoring, redesign, and modifications. In the next two chapters we look first at the basic data modeling concepts and then—starting in Chapter 4—we apply these concepts to the database design process.

(Continues...)



Excerpted from Database Modeling & Design: Logical Design by TOBY TEOREY, SAM LIGHTSTONE, and TOM NADEAU. Copyright © 2006 by Elsevier Inc. Excerpted by permission of MORGAN KAUFMANN. All rights reserved. No part of this excerpt may be reproduced or reprinted without permission in writing from the publisher.

Table of Contents

Chapter 1 Introduction
Chapter 2 The Entity-Relationship Model
Chapter 3 Unified Modeling Language
Chapter 4 Requirements Analysis and Conceptual Data Modeling
Chapter 5 Transforming the Conceptual Data Model to SQL
Chapter 6 Normalization
Chapter 7 An Example of Logical Database Design
Chapter 8 Business Intelligence
Chapter 9 CASE Tools for Logical Database Design
Appendix The Basics of SQL

What People are Saying About This

From the Publisher

Extensively revised edition of the classic logical database design reference.
