Data Mining: Practical Machine Learning Tools and Techniques
By Ian H. Witten, Eibe Frank, and Mark A. Hall
Morgan Kaufmann. Copyright © 2011 Elsevier Inc. All rights reserved. ISBN: 978-0-08-089036-4
    Chapter One 
  What's It All About?    
  Human in vitro fertilization involves collecting several eggs from a woman's ovaries,  which, after fertilization with partner or donor sperm, produce several embryos.  Some of these are selected and transferred to the woman's uterus. The challenge is  to select the "best" embryos to use—the ones that are most likely to survive. Selection  is based on around 60 recorded features of the embryos—characterizing their  morphology, oocyte, and follicle, and the sperm sample. The number of features is  large enough to make it difficult for an embryologist to assess them all simultaneously  and correlate historical data with the crucial outcome of whether that embryo  did or did not result in a live child. In a research project in England, machine learning  has been investigated as a technique for making the selection, using historical  records of embryos and their outcome as training data.  
     Every year, dairy farmers in New Zealand have to make a tough business decision:  which cows to retain in their herd and which to sell off to an abattoir. Typically,  one-fifth of the cows in a dairy herd are culled each year near the end of the milking  season as feed reserves dwindle. Each cow's breeding and milk production history  influences this decision. Other factors include age (a cow nears the end of its productive  life at eight years), health problems, history of difficult calving, undesirable  temperament traits (kicking or jumping fences), and not being pregnant with calf  for the following season. About 700 attributes for each of several million cows have  been recorded over the years. Machine learning has been investigated as a way of  ascertaining what factors are taken into account by successful farmers—not to  automate the decision but to propagate their skills and experience to others.  
     Life and death. From Europe to the Antipodes. Family and business. Machine  learning is a burgeoning new technology for mining knowledge from data, a  technology that a lot of people are starting to take seriously.  
  
  1.1 DATA MINING AND MACHINE LEARNING  
  We are overwhelmed with data. The amount of data in the world and in our lives  seems ever-increasing—and there's no end in sight. Omnipresent computers make  it too easy to save things that previously we would have trashed. Inexpensive disks  and online storage make it too easy to postpone decisions about what to do with all  this stuff—we simply get more memory and keep it all. Ubiquitous electronics  record our decisions, our choices in the supermarket, our financial habits, our  comings and goings. We swipe our way through the world, every swipe a record in  a database. The World Wide Web (WWW) overwhelms us with information; meanwhile,  every choice we make is recorded. And all of these are just personal choices—they   have countless counterparts in the world of commerce and industry. We could  all testify to the growing gap between the generation of data and our understanding  of it. As the volume of data increases, inexorably, the proportion of it that people  understand decreases alarmingly. Lying hidden in all this data is information—potentially   useful information—that is rarely made explicit or taken advantage of.  
     This book is about looking for patterns in data. There is nothing new about this.  People have been seeking patterns in data ever since human life began. Hunters seek  patterns in animal migration behavior, farmers seek patterns in crop growth, politicians  seek patterns in voter opinion, and lovers seek patterns in their partners'  responses. A scientist's job (like a baby's) is to make sense of data, to discover the  patterns that govern how the physical world works and encapsulate them in theories  that can be used for predicting what will happen in new situations. The entrepreneur's  job is to identify opportunities—that is, patterns in behavior that can be turned  into a profitable business—and exploit them.  
     In data mining, the data is stored electronically and the search is automated—or  at least augmented—by computer. Even this is not particularly new. Economists,  statisticians, forecasters, and communication engineers have long worked with the  idea that patterns in data can be sought automatically, identified, validated, and used  for prediction. What is new is the staggering increase in opportunities for finding  patterns in data. The unbridled growth of databases in recent years, databases for  such everyday activities as customer choices, brings data mining to the forefront of  new business technologies. It has been estimated that the amount of data stored in  the world's databases doubles every 20 months, and although it would surely be  difficult to justify this figure in any quantitative sense, we can all relate to the pace  of growth qualitatively. As the flood of data swells and machines that can undertake  the searching become commonplace, the opportunities for data mining increase. As  the world grows in complexity, overwhelming us with the data it generates, data  mining becomes our only hope for elucidating hidden patterns. Intelligently analyzed  data is a valuable resource. It can lead to new insights, and, in commercial settings,  to competitive advantages.  
     Data mining is about solving problems by analyzing data already present in  databases. Suppose, to take a well-worn example, the problem is fickle customer  loyalty in a highly competitive marketplace. A database of customer choices, along  with customer profiles, holds the key to this problem. Patterns of behavior of former  customers can be analyzed to identify distinguishing characteristics of those likely  to switch products and those likely to remain loyal. Once such characteristics are  found, they can be put to work to identify present customers who are likely to jump  ship. This group can be targeted for special treatment, treatment too costly to apply  to the customer base as a whole. More positively, the same techniques can be used  to identify customers who might be attracted to another service the enterprise provides,  one they are not presently enjoying, to target them for special offers that  promote this service. In today's highly competitive, customer-centered, service-oriented  economy, data is the raw material that fuels business growth—if only it can  be mined.  
     Data mining is defined as the process of discovering patterns in data. The process  must be automatic or (more usually) semiautomatic. The patterns discovered must  be meaningful in that they lead to some advantage, usually an economic one. The  data is invariably present in substantial quantities.  
     And how are the patterns expressed? Useful patterns allow us to make nontrivial  predictions on new data. There are two extremes for the expression of a pattern: as  a black box whose innards are effectively incomprehensible, and as a transparent  box whose construction reveals the structure of the pattern. Both, we are assuming,  make good predictions. The difference is whether or not the patterns that are mined  are represented in terms of a structure that can be examined, reasoned about, and  used to inform future decisions. Such patterns we call structural because they  capture the decision structure in an explicit way. In other words, they help to explain  something about the data.  
     Now, again, we can say what this book is about: It is about techniques for finding  and describing structural patterns in data. Most of the techniques that we cover have  developed within a field known as machine learning. But first let us look at what  structural patterns are.  
  
  Describing Structural Patterns  
  What is meant by structural patterns? How do you describe them? And what form  does the input take? We will answer these questions by way of illustration rather  than by attempting formal, and ultimately sterile, definitions. There will be plenty  of examples later in this chapter, but let's examine one right now to get a feeling  for what we're talking about.  
     Look at the contact lens data in Table 1.1. It gives the conditions under which  an optician might want to prescribe soft contact lenses, hard contact lenses, or no  contact lenses at all; we will say more about what the individual features mean later.  Each line of the table is one of the examples. Part of a structural description of this  information might be as follows:  
If tear production rate = reduced then recommendation = none
Otherwise, if age = young and astigmatic = no
           then recommendation = soft
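     To make this concrete, here is a minimal sketch (not from the book, written in Python purely for illustration) of how these two rules could be expressed as executable code. The dictionary-based representation of an example and the function name recommend are assumptions made for the sake of the sketch; the feature names and values are taken from the rules above.

# A sketch of the two rules above; examples matched by neither rule are
# left undecided, since this is only part of a structural description.
def recommend(example):
    """Return a lens recommendation for a dict of feature values."""
    if example["tear production rate"] == "reduced":
        return "none"
    if example["age"] == "young" and example["astigmatic"] == "no":
        return "soft"
    return None  # not covered by this partial rule set

# Usage with a hypothetical example:
print(recommend({"age": "young",
                 "astigmatic": "no",
                 "tear production rate": "normal"}))  # prints: soft

Encoded this way, the rule set is a transparent box: the conditions it tests can be read directly from the code.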
  
  Structural descriptions need not necessarily be couched as rules such as these. Decision  trees, which specify the sequences of decisions that need to be made along with  the resulting recommendation, are another popular means of expression.  
     This example is a very simplistic one. For a start, all combinations of possible  values are represented in the table. There are 24 rows, representing three possible  values of age and two values each for spectacle prescription, astigmatism, and tear  production rate (3 × 2 × 2 × 2 = 24). The rules do not really generalize from the  data; they merely summarize it. In most learning situations, the set of examples given  as input is far from complete, and part of the job is to generalize to other, new  examples. You can imagine omitting some of the rows in the table for which the tear  production rate is reduced and still coming up with the rule  
  If tear production rate = reduced then recommendation = none  
  This would generalize to the missing rows and fill them in correctly. Second, values  are specified for all the features in all the examples. Real-life datasets invariably  contain examples in which the values of some features, for some reason or other,  are unknown—for example, measurements were not taken or were lost. Third, the  preceding rules classify the examples correctly, whereas often, because of errors or  noise in the data, misclassifications occur even on the data that is used to create the  classifier.  
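     Returning to the arithmetic of the complete table, the following sketch (again an illustration, not from the book) enumerates the full space of attribute combinations and counts how many rows the single tear-production rule covers. Only the values young, no, and reduced are named in the text; the remaining value names are assumptions based on the standard contact lens dataset, and only the counts matter for the point being made.

# Enumerate every combination of attribute values and count how many
# rows the rule "tear production rate = reduced -> none" accounts for.
from itertools import product

ages = ["young", "pre-presbyopic", "presbyopic"]    # 3 possible values of age
prescriptions = ["myope", "hypermetrope"]           # 2 values (names assumed)
astigmatism = ["no", "yes"]                         # 2 values
tear_rates = ["reduced", "normal"]                  # 2 values

rows = list(product(ages, prescriptions, astigmatism, tear_rates))
print(len(rows))                                    # 3 x 2 x 2 x 2 = 24

covered = [r for r in rows if r[3] == "reduced"]
print(len(covered))                                 # 12 rows: recommendation = none

Half of the 24 rows have a reduced tear production rate, so this single rule fills in any of those rows that happen to be missing from the input.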
  
  Machine Learning  
  Now that we have some idea of the inputs and outputs, let's turn to machine learning.  What is learning, anyway? What is machine learning? These are philosophical  questions, and we will not be too concerned with philosophy in this book; our  emphasis is firmly on the practical. However, it is worth spending a few moments  at the outset on fundamental issues, just to see how tricky they are, before rolling  up our sleeves and looking at machine learning in practice.  
     Our dictionary defines "to learn" as  
 To get knowledge of something by study, experience, or being taught;
 To become aware by information or from observation;
 To commit to memory;
 To be informed of or to ascertain;
 To receive instruction.
  
  These meanings have some shortcomings when it comes to talking about computers.  For the first two, it is virtually impossible to test whether learning has been achieved  or not. How do you know whether a machine has got knowledge of something? You  probably can't just ask it questions; even if you could, you wouldn't be testing its  ability to learn but its ability to answer questions. How do you know whether it has  become aware of something? The whole question of whether computers can be  aware, or conscious, is a burning philosophical issue.  
     As for the last three meanings, although we can see what they denote in human  terms, merely committing to memory and receiving instruction seem to fall far short  of what we might mean by machine learning. They are too passive, and we know  that computers find these tasks trivial. Instead, we are interested in improvements  in performance, or at least in the potential for performance, in new situations. You  can commit something to memory or be informed of something by rote learning  without being able to apply the new knowledge to new situations. In other words,  you can receive instruction without benefiting from it at all.  