Now updated—the systematic introductory guide to modern analysis of large data sets
As data sets continue to grow in size and complexity, there has been an inevitable move towards indirect, automatic, and intelligent data analysis in which the analyst works via more complex and sophisticated software tools. This book reviews state-of-the-art methodologies and techniques for analyzing enormous quantities of raw data in high-dimensional data spaces to extract new information for decision-making.
This Second Edition of Data Mining: Concepts, Models, Methods, and Algorithms discusses data mining principles and then describes representative state-of-the-art methods and algorithms originating from different disciplines such as statistics, machine learning, neural networks, fuzzy logic, and evolutionary computation. Detailed algorithms are provided with necessary explanations and illustrative examples, and questions and exercises for practice at the end of each chapter. This new edition features the following new techniques/methodologies:
- Support Vector Machines (SVM)—developed based on statistical learning theory, they have a large potential for applications in predictive data mining
- Kohonen Maps (Self-Organizing Maps, SOM)—one of the most widely applied neural-network-based methodologies for descriptive data mining and multidimensional data visualization
- DBSCAN, BIRCH, and distributed DBSCAN clustering algorithms—representatives of important classes of density-based and scalable clustering methodologies
- Bayesian Networks (BN) methodology often used for causality modeling
- Algorithms for measuring betweenness and other centrality measures in graphs, important for applications in mining large social networks
- CART algorithm and Gini index in building decision trees
- Bagging and boosting approaches to ensemble-learning methodologies, with details of the AdaBoost algorithm
- Relief algorithm, one of the core feature selection algorithms inspired by instance-based learning
- PageRank algorithm for mining and authority ranking of web pages
- Latent Semantic Analysis (LSA) for text mining and measuring semantic similarities between text-based documents
- New sections on temporal, spatial, web, text, parallel, and distributed data mining
- More emphasis on business, privacy, security, and legal aspects of data mining technology
This text offers guidance on how and when to use a particular software tool (with the companion data sets) from among the hundreds on offer when faced with a data set to mine. This allows analysts to create and perform their own data-mining experiments using their knowledge of the methodologies and techniques provided. The book emphasizes the selection of appropriate methodologies and data-analysis software, as well as parameter tuning. These critically important, qualitative decisions can be made only with the deeper understanding of what each parameter means, and of its role within a technique, that this book provides.
This volume is primarily intended as a data-mining textbook for computer science, computer engineering, and computer information systems majors at the graduate level. Senior undergraduate students with the appropriate background can also successfully comprehend all topics presented here.
About the Author
MEHMED KANTARDZIC, PhD, is a professor in the Department of Computer Engineering and Computer Science (CECS) in the Speed School of Engineering at the University of Louisville, Director of CECS Graduate Studies, as well as Director of the Data Mining Lab. A member of IEEE, ISCA, and SPIE, Dr. Kantardzic has won awards for several of his papers, has been published in numerous refereed journals, and has been an invited presenter at various conferences. He has also been a contributor to numerous books.
Read an Excerpt
Data Mining: Concepts, Models, Methods, and Algorithms
By Mehmed Kantardzic
John Wiley & Sons
ISBN: 0-471-22852-4
Chapter One: Data-Mining Concepts
Chapter objectives:
- Understand the need for analyses of large, complex, information-rich data sets.
- Identify the goals and primary tasks of the data-mining process.
- Describe the roots of data-mining technology.
- Recognize the iterative character of a data-mining process and specify its basic steps.
- Explain the influence of data quality on a data-mining process.
- Establish the relation between data warehousing and data mining.
1.1 INTRODUCTION
Modern science and engineering are based on using first-principle models to describe physical, biological, and social systems. Such an approach starts with a basic scientific model, such as Newton's laws of motion or Maxwell's equations in electromagnetism, and then builds upon them various applications in mechanical engineering or electrical engineering. In this approach, experimental data are used to verify the underlying first-principle models and to estimate some of the parameters that are difficult or sometimes impossible to measure directly. However, in many domains the underlying first principles are unknown, or the systems under study are too complex to be mathematically formalized. With the growing use of computers, there is a great amount of data being generated by such systems. In the absence of first-principle models, such readily available data can be used to derive models by estimating useful relationships between a system's variables (i.e., unknown input-output dependencies). Thus there is currently a paradigm shift from classical modeling and analyses based on first principles to developing models and the corresponding analyses directly from data.
We have gradually grown accustomed to the fact that there are tremendous volumes of data filling our computers, networks, and lives. Government agencies, scientific institutions, and businesses have all dedicated enormous resources to collecting and storing data. In reality, only a small amount of these data will ever be used because, in many cases, the volumes are simply too large to manage, or the data structures themselves are too complicated to be analyzed effectively. How could this happen? The primary reason is that the original effort to create a data set is often focused on issues such as storage efficiency; it does not include a plan for how the data will eventually be used and analyzed.
The need to understand large, complex, information-rich data sets is common to virtually all fields of business, science, and engineering. In the business world, corporate and customer data are becoming recognized as a strategic asset. The ability to extract useful knowledge hidden in these data and to act on that knowledge is becoming increasingly important in today's competitive world. The entire process of applying a computer-based methodology, including new techniques, for discovering knowledge from data is called data mining.
Data mining is an iterative process within which progress is defined by discovery, through either automatic or manual methods. Data mining is most useful in an exploratory analysis scenario in which there are no predetermined notions about what will constitute an "interesting" outcome. Data mining is the search for new, valuable, and nontrivial information in large volumes of data. It is a cooperative effort of humans and computers. Best results are achieved by balancing the knowledge of human experts in describing problems and goals with the search capabilities of computers.
In practice, the two primary goals of data mining tend to be prediction and description. Prediction involves using some variables or fields in the data set to predict unknown or future values of other variables of interest. Description, on the other hand, focuses on finding patterns describing the data that can be interpreted by humans. Therefore, it is possible to put data-mining activities into one of two categories:
1) Predictive data mining, which produces the model of the system described by the given data set, or
2) Descriptive data mining, which produces new, nontrivial information based on the available data set.
On the predictive end of the spectrum, the goal of data mining is to produce a model, expressed as an executable code, which can be used to perform classification, prediction, estimation, or other similar tasks. On the other, descriptive, end of the spectrum, the goal is to gain an understanding of the analyzed system by uncovering patterns and relationships in large data sets. The relative importance of prediction and description for particular data-mining applications can vary considerably. The goals of prediction and description are achieved by using data-mining techniques, explained later in this book, for the following primary data-mining tasks:
1. Classification - discovery of a predictive learning function that classifies a data item into one of several predefined classes.
2. Regression - discovery of a predictive learning function that maps a data item to a real-valued prediction variable.
3. Clustering - a common descriptive task in which one seeks to identify a finite set of categories or clusters to describe the data.
4. Summarization - an additional descriptive task that involves methods for finding a compact description for a set (or subset) of data.
5. Dependency Modeling - finding a local model that describes significant dependencies between variables or between the values of a feature in a data set or in a part of a data set.
6. Change and Deviation Detection - discovering the most significant changes in the data set.
A more formal treatment, with graphical interpretation of data-mining tasks for complex and large data sets and illustrative examples, is given in Chapter 4. The introductory classifications and definitions are given here only to give the reader a feel for the wide spectrum of problems and tasks that may be solved using data-mining technology.
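The two categories above can be sketched on a toy data set: a nearest-neighbor classifier plays the predictive role (producing an answer for a new item), while a crude two-means grouping plays the descriptive role (producing a human-interpretable summary). This is a minimal illustration, not a method taken from the book; all data and function names are made up.

```python
def predict_1nn(train, labels, x):
    """Predictive task: classify x by the label of its nearest neighbor."""
    dists = [(xi[0] - x[0]) ** 2 + (xi[1] - x[1]) ** 2 for xi in train]
    return labels[dists.index(min(dists))]

def describe_2means(points, iters=10):
    """Descriptive task: summarize the data as two cluster centroids."""
    c0, c1 = points[0], points[-1]          # crude initialization
    for _ in range(iters):
        g0 = [p for p in points
              if (p[0]-c0[0])**2 + (p[1]-c0[1])**2
              <= (p[0]-c1[0])**2 + (p[1]-c1[1])**2]
        g1 = [p for p in points if p not in g0]
        c0 = (sum(p[0] for p in g0)/len(g0), sum(p[1] for p in g0)/len(g0))
        c1 = (sum(p[0] for p in g1)/len(g1), sum(p[1] for p in g1)/len(g1))
    return c0, c1

# Two well-separated groups of 2-D samples with known labels.
points = [(0.0, 0.1), (0.2, 0.0), (0.1, 0.2), (5.0, 5.1), (5.2, 4.9), (4.9, 5.0)]
labels = ["A", "A", "A", "B", "B", "B"]

print(predict_1nn(points, labels, (0.1, 0.1)))   # a predictive answer
print(describe_2means(points))                    # a descriptive summary
```

The same data set serves both goals: the classifier answers a question about one unseen item, while the centroids describe the overall structure of all items.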
The success of a data-mining engagement depends largely on the amount of energy, knowledge, and creativity that the designer puts into it. In essence, data mining is like solving a puzzle. The individual pieces of the puzzle are not complex structures in and of themselves. Taken as a collective whole, however, they can constitute very elaborate systems. As you try to unravel these systems, you will probably get frustrated, start forcing parts together, and generally become annoyed at the entire process; but once you know how to work with the pieces, you realize that it was not really that hard in the first place. The same analogy can be applied to data mining. In the beginning, the designers of the data-mining process probably do not know much about the data sources; if they did, they would most likely not be interested in performing data mining. Individually, the data seem simple, complete, and explainable. But collectively, they take on a whole new appearance that is intimidating and difficult to comprehend, like the puzzle. Therefore, being an analyst and designer in a data-mining process requires, besides thorough professional knowledge, creative thinking and a willingness to see problems in a different light.
Data mining is one of the fastest growing fields in the computer industry. Once a small interest area within computer science and statistics, it has quickly expanded into a field of its own. One of the greatest strengths of data mining is reflected in its wide range of methodologies and techniques that can be applied to a host of problem sets. Since data mining is a natural activity to be performed on large data sets, one of the largest target markets is the entire data warehousing, data-mart, and decision-support community, encompassing professionals from such industries as retail, manufacturing, telecommunications, healthcare, insurance, and transportation. In the business community, data mining can be used to discover new purchasing trends, plan investment strategies, and detect unauthorized expenditures in the accounting system. It can improve marketing campaigns and the outcomes can be used to provide customers with more focused support and attention. Data-mining techniques can be applied to problems of business process reengineering, in which the goal is to understand interactions and relationships among business practices and organizations.
Many law enforcement and special investigative units, whose mission is to identify fraudulent activities and discover crime trends, have also used data mining successfully. For example, these methodologies can aid analysts in the identification of critical behavior patterns in the communication interactions of narcotics organizations, the monetary transactions of money laundering and insider trading operations, the movements of serial killers, and the targeting of smugglers at border crossings. Data-mining techniques have also been employed by people in the intelligence community who maintain many large data sources as a part of the activities relating to matters of national security. Appendix B of the book gives a brief overview of typical commercial applications of data-mining technology today.
1.2 DATA-MINING ROOTS
Looking at how different authors describe data mining, it is clear that we are far from a universal agreement on the definition of data mining or even what constitutes data mining. Is data mining a form of statistics enriched with learning theory or is it a revolutionary new concept? In our view, most data-mining problems and corresponding solutions have roots in classical data analysis. Data mining has its origins in various disciplines, of which the two most important are statistics and machine learning. Statistics has its roots in mathematics, and therefore, there has been an emphasis on mathematical rigor, a desire to establish that something is sensible on theoretical grounds before testing it in practice. In contrast, the machine-learning community has its origins very much in computer practice. This has led to a practical orientation, a willingness to test something out to see how well it performs, without waiting for a formal proof of effectiveness.
If the place given to mathematics and formalizations is one of the major differences between statistical and machine-learning approaches to data mining, another is in the relative emphasis they give to models and algorithms. Modern statistics is almost entirely driven by the notion of a model. This is a postulated structure, or an approximation to a structure, which could have led to the data. In place of the statistical emphasis on models, machine learning tends to emphasize algorithms. This is hardly surprising; the very word "learning" contains the notion of a process, an implicit algorithm.
Basic modeling principles in data mining also have roots in control theory, which is primarily applied to engineering systems and industrial processes. The problem of determining a mathematical model for an unknown system (also referred to as the target system) by observing its input-output data pairs is generally referred to as system identification. The purposes of system identification are multiple and, from a standpoint of data mining, the most important are to predict a system's behavior and to explain the interaction and relationships between the variables of a system.
System identification generally involves two top-down steps:
1. Structure identification - In this step, we need to apply a priori knowledge about the target system to determine a class of models within which the search for the most suitable model is to be conducted. Usually this class of models is denoted by a parametrized function y = f(u,t), where y is the model's output, u is an input vector, and t is a parameter vector. The determination of the function f is problem-dependent, and the function is based on the designer's experience, intuition, and the laws of nature governing the target system.
2. Parameter identification - In the second step, when the structure of the model is known, all we need to do is apply optimization techniques to determine parameter vector t such that the resulting model y = f(u,t) can describe the system appropriately.
In general, system identification is not a one-pass process: both structure and parameter identification need to be done repeatedly until a satisfactory model is found. This iterative process is represented graphically in Figure 1.1. Typical steps in every iteration are as follows:
1. Specify and parametrize a class of formalized (mathematical) models, y* = f(u,t), representing the system to be identified.
2. Perform parameter identification to choose the parameters that best fit the available data set (the difference y* - y is minimal).
3. Conduct validation tests to see if the model identified responds correctly to an unseen data set (often referred to as a test, validating, or checking data set).
4. Terminate the process once the results of the validation test are satisfactory.
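The four steps above can be sketched for the simplest model class, a line y* = f(u,t) = t0 + t1*u: a closed-form least-squares fit plays the role of parameter identification, and a tolerance check on unseen input-output pairs plays the role of validation. The data and function names are illustrative, not from the book.

```python
def identify_parameters(us, ys):
    """Step 2: least-squares fit of t = (t0, t1) to the training pairs."""
    n = len(us)
    mu, my = sum(us) / n, sum(ys) / n
    t1 = (sum((u - mu) * (y - my) for u, y in zip(us, ys))
          / sum((u - mu) ** 2 for u in us))
    t0 = my - t1 * mu
    return t0, t1

def validate(t, us, ys, tol):
    """Step 3: check the identified model on unseen (checking) data."""
    t0, t1 = t
    return all(abs((t0 + t1 * u) - y) <= tol for u, y in zip(us, ys))

# Input-output pairs observed from the "unknown" target system y = 2u + 1.
train_u, train_y = [0.0, 1.0, 2.0, 3.0], [1.0, 3.0, 5.0, 7.0]
test_u, test_y = [4.0, 5.0], [9.0, 11.0]

t = identify_parameters(train_u, train_y)    # Step 1 fixed the linear structure
ok = validate(t, test_u, test_y, tol=1e-6)   # Step 4: stop when satisfactory
print(t, ok)
```

If validation failed here, a real identification loop would return to step 1 and postulate a richer model class before fitting again.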
If we do not have any a priori knowledge about the target system, then structure identification becomes difficult, and we have to select the structure by trial and error. While we know a great deal about the structures of most engineering systems and industrial processes, in a vast majority of target systems where we apply data-mining techniques, these structures are totally unknown, or they are so complex that it is impossible to obtain an adequate mathematical model. Therefore, new techniques were developed for parameter identification and they are today a part of the spectra of data-mining techniques.
Finally, we can distinguish between how the terms "model" and "pattern" are interpreted in data mining. A model is a "large scale" structure, perhaps summarizing relationships over many (sometimes all) cases, whereas a pattern is a local structure, satisfied by few cases or in a small region of a data space. It is also worth noting here that the word "pattern", as it is used in pattern recognition, has a rather different meaning for data mining. In pattern recognition it refers to the vector of measurements characterizing a particular object, which is a point in a multidimensional data space. In data mining, a pattern is simply a local model. In this book we refer to n-dimensional vectors of data as samples.
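The model/pattern distinction can be made concrete with a toy sketch, assuming entirely made-up data: a global summary computed over all samples stands in for a model, while a regularity that holds only in a small region of the data space stands in for a pattern.

```python
# Six (x, y) samples; the last two sit in a small region of the data space.
samples = [(1, 2.1), (2, 3.9), (3, 6.2), (4, 8.1), (10, 10.0), (11, 10.0)]

# Model: a "large scale" structure summarizing all cases,
# here simply the mean of the y values over the whole data set.
model_mean = sum(y for _, y in samples) / len(samples)

# Pattern: a local structure satisfied by few cases,
# here the rule "whenever x >= 10, y equals 10.0".
region = [y for x, y in samples if x >= 10]
pattern_holds = all(y == 10.0 for y in region)

print(round(model_mean, 2), pattern_holds)
```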
Excerpted from Data Mining by Mehmed Kantardzic Excerpted by permission.
All rights reserved. No part of this excerpt may be reproduced or reprinted without permission in writing from the publisher.
Table of Contents
Preface to the Second Edition xiii
Preface to the First Edition xv
1 DATA-MINING CONCEPTS 1
1.1 Introduction 1
1.2 Data-Mining Roots 4
1.3 Data-Mining Process 6
1.4 Large Data Sets 9
1.5 Data Warehouses for Data Mining 14
1.6 Business Aspects of Data Mining: Why a Data-Mining Project Fails 17
1.7 Organization of This Book 21
1.8 Review Questions and Problems 23
1.9 References for Further Study 24
2 PREPARING THE DATA 26
2.1 Representation of Raw Data 26
2.2 Characteristics of Raw Data 31
2.3 Transformation of Raw Data 33
2.4 Missing Data 36
2.5 Time-Dependent Data 37
2.6 Outlier Analysis 41
2.7 Review Questions and Problems 48
2.8 References for Further Study 51
3 DATA REDUCTION 53
3.1 Dimensions of Large Data Sets 54
3.2 Feature Reduction 56
3.3 Relief Algorithm 66
3.4 Entropy Measure for Ranking Features 68
3.5 PCA 70
3.6 Value Reduction 73
3.7 Feature Discretization: ChiMerge Technique 77
3.8 Case Reduction 80
3.9 Review Questions and Problems 83
3.10 References for Further Study 85
4 LEARNING FROM DATA 87
4.1 Learning Machine 89
4.2 SLT 93
4.3 Types of Learning Methods 99
4.4 Common Learning Tasks 101
4.5 SVMs 105
4.6 kNN: Nearest Neighbor Classifier 118
4.7 Model Selection versus Generalization 122
4.8 Model Estimation 126
4.9 90% Accuracy: Now What? 132
4.10 Review Questions and Problems 136
4.11 References for Further Study 138
5 STATISTICAL METHODS 140
5.1 Statistical Inference 141
5.2 Assessing Differences in Data Sets 143
5.3 Bayesian Inference 146
5.4 Predictive Regression 149
5.5 ANOVA 155
5.6 Logistic Regression 157
5.7 Log-Linear Models 158
5.8 LDA 162
5.9 Review Questions and Problems 164
5.10 References for Further Study 167
6 DECISION TREES AND DECISION RULES 169
6.1 Decision Trees 171
6.2 C4.5 Algorithm: Generating a Decision Tree 173
6.3 Unknown Attribute Values 180
6.4 Pruning Decision Trees 184
6.5 C4.5 Algorithm: Generating Decision Rules 185
6.6 CART Algorithm & Gini Index 189
6.7 Limitations of Decision Trees and Decision Rules 192
6.8 Review Questions and Problems 194
6.9 References for Further Study 198
7 ARTIFICIAL NEURAL NETWORKS 199
7.1 Model of an Artificial Neuron 201
7.2 Architectures of ANNs 205
7.3 Learning Process 207
7.4 Learning Tasks Using ANNs 210
7.5 Multilayer Perceptrons (MLPs) 213
7.6 Competitive Networks and Competitive Learning 221
7.7 SOMs 225
7.8 Review Questions and Problems 231
7.9 References for Further Study 233
8 ENSEMBLE LEARNING 235
8.1 Ensemble-Learning Methodologies 236
8.2 Combination Schemes for Multiple Learners 240
8.3 Bagging and Boosting 241
8.4 AdaBoost 243
8.5 Review Questions and Problems 245
8.6 References for Further Study 247
9 CLUSTER ANALYSIS 249
9.1 Clustering Concepts 250
9.2 Similarity Measures 253
9.3 Agglomerative Hierarchical Clustering 259
9.4 Partitional Clustering 263
9.5 Incremental Clustering 266
9.6 DBSCAN Algorithm 270
9.7 BIRCH Algorithm 272
9.8 Clustering Validation 275
9.9 Review Questions and Problems 275
9.10 References for Further Study 279
10 ASSOCIATION RULES 280
10.1 Market-Basket Analysis 281
10.2 Algorithm Apriori 283
10.3 From Frequent Itemsets to Association Rules 285
10.4 Improving the Efficiency of the Apriori Algorithm 286
10.5 FP Growth Method 288
10.6 Associative-Classification Method 290
10.7 Multidimensional Association–Rules Mining 293
10.8 Review Questions and Problems 295
10.9 References for Further Study 298
11 WEB MINING AND TEXT MINING 300
11.1 Web Mining 300
11.2 Web Content, Structure, and Usage Mining 302
11.3 HITS and LOGSOM Algorithms 305
11.4 Mining Path–Traversal Patterns 310
11.5 PageRank Algorithm 313
11.6 Text Mining 316
11.7 Latent Semantic Analysis (LSA) 320
11.8 Review Questions and Problems 324
11.9 References for Further Study 326
12 ADVANCES IN DATA MINING 328
12.1 Graph Mining 329
12.2 Temporal Data Mining 343
12.3 Spatial Data Mining (SDM) 357
12.4 Distributed Data Mining (DDM) 360
12.5 Correlation Does Not Imply Causality 369
12.6 Privacy, Security, and Legal Aspects of Data Mining 376
12.7 Review Questions and Problems 381
12.8 References for Further Study 382
13 GENETIC ALGORITHMS 385
13.1 Fundamentals of GAs 386
13.2 Optimization Using GAs 388
13.3 A Simple Illustration of a GA 394
13.4 Schemata 399
13.5 TSP 402
13.6 Machine Learning Using GAs 404
13.7 GAs for Clustering 409
13.8 Review Questions and Problems 411
13.9 References for Further Study 413
14 FUZZY SETS AND FUZZY LOGIC 414
14.1 Fuzzy Sets 415
14.2 Fuzzy-Set Operations 420
14.3 Extension Principle and Fuzzy Relations 425
14.4 Fuzzy Logic and Fuzzy Inference Systems 429
14.5 Multifactorial Evaluation 433
14.6 Extracting Fuzzy Models from Data 436
14.7 Data Mining and Fuzzy Sets 441
14.8 Review Questions and Problems 443
14.9 References for Further Study 445
15 VISUALIZATION METHODS 447
15.1 Perception and Visualization 448
15.2 Scientific Visualization and Information Visualization 449
15.3 Parallel Coordinates 455
15.4 Radial Visualization 458
15.5 Visualization Using Self-Organizing Maps (SOMs) 460
15.6 Visualization Systems for Data Mining 462
15.7 Review Questions and Problems 467
15.8 References for Further Study 468
Appendix A 470
A.1 Data-Mining Journals 470
A.2 Data-Mining Conferences 473
A.3 Data-Mining Forums/Blogs 477
A.4 Data Sets 478
A.5 Commercially and Publicly Available Tools 480
A.6 Web Site Links 489
Appendix B: Data-Mining Applications 496
B.1 Data Mining for Financial Data Analysis 496
B.2 Data Mining for the Telecommunications Industry 499
B.3 Data Mining for the Retail Industry 501
B.4 Data Mining in Health Care and Biomedical Research 503
B.5 Data Mining in Science and Engineering 506
B.6 Pitfalls of Data Mining 509