Science in the Age of Computer Simulation

by Eric Winsberg

Overview

Computer simulation was pioneered as a scientific tool in meteorology and nuclear physics in the period following World War II, but it has grown rapidly to become indispensable in a wide variety of scientific disciplines, including astrophysics, high-energy physics, climate science, engineering, ecology, and economics. Digital computer simulation helps scientists study phenomena of great complexity, but how much do we know about the limits and possibilities of this new scientific practice? How do simulations compare to traditional experiments? And are they reliable? Eric Winsberg seeks to answer these questions in Science in the Age of Computer Simulation.

Scrutinizing these issues through a philosophical lens, Winsberg explores the impact of simulation on such issues as the nature of scientific evidence; the role of values in science; the nature and role of fictions in science; and the relationship between simulation and experiment, theories and data, and theories at different levels of description. Science in the Age of Computer Simulation will transform many of the core issues in philosophy of science, as well as our basic understanding of the role of the digital computer in the sciences.


Product Details

ISBN-13: 9780226902043
Publisher: University of Chicago Press
Publication date: 10/30/2010
Pages: 168
Product dimensions: 6.00(w) x 9.00(h) x 0.60(d)

About the Author

Eric Winsberg is associate professor of philosophy at the University of South Florida.

Read an Excerpt

Science in the Age of Computer Simulation


By ERIC B. WINSBERG

The University of Chicago Press

Copyright © 2010 The University of Chicago
All rights reserved.

ISBN: 978-0-226-90204-3


Chapter One

Introduction

Major developments in the history of the philosophy of science have always been driven by major developments in the sciences. The most famous examples, of course, are the revolutionary changes in physics at the beginning of the twentieth century that inspired the logical positivists of the Vienna Circle. But there are many others. Kant's conception of synthetic a priori knowledge was originally intended to address the new mechanics of Newton. The rise of non-Euclidean geometries in the nineteenth century led to Helmholtz's revised formulation of transcendentalism, as well as, more famously, to Poincaré's defense of conventionalism. The rise of atomic theory in the nineteenth century and the ensuing skepticism about the genuine existence of atoms, to raise one final example, played a large role in igniting and fueling debates about scientific realism that continue to rage today.

Over the last fifty years, however, there has been a revolutionary development affecting almost all of the sciences that, at least until very recently, has been largely ignored by philosophers of science. The development I am speaking of is the astonishing growth, in almost all of the sciences, of the use of the digital computer to study phenomena of great complexity-the rise of computer simulations. More and more scientific "experiments" are, to use the vernacular of the day, being carried out "in silico."

It is certainly true that, historically, most of the famous scientific developments that have had an impact on the philosophy of science have involved revolutionary changes at the level of fundamental theory. It is also true that the use of computer simulation to study complex phenomena usually occurs against a backdrop of well-established basic theory, rather than in the process of altering, let alone revolutionizing, such theory. But surely there is no reason to think that it is only changes in basic theory that should be of interest to philosophers. Surely there is no reason to think that new experimental methods, new research technologies, or innovative ways of solving new sets of problems within existing theory could not have a similar impact on philosophy. It is not altogether unlikely that some of the major accomplishments in the physical sciences to come in the near future will have as much to do with modeling complex phenomena within existing theories as with developing novel fundamental theories.

That, in a nutshell, is the basic sentiment that motivates this book: that the last part of the twentieth century has been, and the twenty-first century is likely to continue to be, the age of computer simulation. This has been an era in which, at least in the physical sciences, and to a large degree elsewhere, major developments in fundamental theory have been slow to come, but there has been an avalanche of novel applications of existing theory-an avalanche aided in no small part by our increasing ability to use the digital computer to build tractable models of greater and greater complexity, using the same available theoretical resources. The book is motivated as well by the conviction that the philosophy of science should continue, as it always has in the past, to respond to the character of the science of its own era. This book, therefore, is about computer simulation and the philosophy of science; and it is as much about what philosophers of science should learn in the age of simulation as it is about what philosophy can contribute to our understanding of how the digital computer is transforming science.

Science and Its Applications

General philosophy of science concerns itself with a diverse set of issues: the nature of scientific evidence; the nature and scope of scientific theories; the relations between theories at different levels of description; the relationship between theories on the one hand and local descriptions of phenomena on the other; the role that various kinds of models play in mediating those relationships; the nature of scientific explanation; and the issue of scientific realism, just to name a few. Our understanding of these topics, I will argue in this book, could greatly profit from a close look at examples of scientific practice where computer simulation plays a prominent role. There are also new topics that can arise for the philosophy of science, topics that have specifically to do with simulation but are of a distinctly philosophical character. I will tackle some of these in this book: What is the relationship between computer simulation, or simulation generally, and experiment? Under what conditions should we expect a computer simulation to be reliable? How can we evaluate a simulation model when the predictions made by such a model are precisely about those phenomena for which data are sparse? What role do deliberately false assumptions play in the construction of simulation models?

Let us begin with one of the oldest topics in the philosophy of science-the nature of scientific evidence. Computer simulations are involved in the creation and justification of scientific knowledge claims, and the problem of the nature of scientific evidence in the philosophy of science is precisely the concern with saying when we do, or don't, have evidence that such claims to knowledge are justified. But simulations more often involve the application rather than the testing of scientific theories. And so the epistemology of simulation is largely unfamiliar territory for the philosophy of science, which has traditionally concerned itself with the justification of theories, not with their application. An appropriately subtle understanding of the epistemology of simulation requires that we rethink the relationship between theories and local descriptions of phenomena.

The rethinking required dovetails nicely, moreover, with recent debates in the philosophy of science about the scope of theories. According to one side in this debate, laws and theories in science are tightly restricted with respect to the features of the world that fall under their domain. The other side maintains that fundamental theories by their nature have universal domains. Few of the simulations considered in this book have much to do with fundamental theory, and so that precise debate will not concern us directly. But there is a related question that the epistemology of simulation must confront: Does the principled scope of every theory extend as far as all of its less-than-principled applications? More concretely, when simulationists use a particular theory to guide the construction of their simulations, is it necessarily the case that their results are, properly speaking, part of the "empirical content" of those theories? This is an important question both for the general philosopher of science interested in the nature of scientific theories and, as we shall see, for anyone interested in the epistemology of simulation. To get a clearer view of these issues, we must look at some of the details of computer simulation methods.

A Brief History

Computer simulation is a method for studying complex systems that has had applications in almost every field of scientific study-from quantum chemistry to the study of traffic-flow patterns. Its history is as long as the history of the digital computer itself, and it begins in the United States during the Second World War. When the physicist John Mauchly visited the Ballistic Research Laboratory at Aberdeen and saw an army of women calculating firing tables on mechanical calculators, he suggested that the laboratory begin work on a digital computer. The Electronic Numerical Integrator and Computer (ENIAC), the first truly programmable digital computer, was born in 1945. John von Neumann took an immediate interest in it and, encouraged by fellow Hungarian-American physicist Edward Teller, he enlisted the help of Nicholas Metropolis and Stanislaw Ulam to begin work on a computational model of a thermonuclear reaction.

Their effort was typical of computer simulation techniques. They began with a mathematical model depicting the time-evolution of the system being studied in terms of equations, or rules-of-evolution, for the variables of the model. The model was constructed (as is typical in the physical sciences) from a mixture of well-established theoretical principles, some physical insight, and some clever mathematical tricks. They then transformed the model into a computable algorithm, and the evolution of the computer was said to "simulate" the evolution of the system in question.
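
To make that pattern concrete, the sketch below shows the same recipe applied to a deliberately simple system: a pair of evolution equations is turned into a step-by-step update rule that a computer can iterate. The toy model (a predator-prey system), the integration scheme, and all parameter values are illustrative assumptions of mine, not anything drawn from the historical thermonuclear calculations.

    # A mathematical model gives rules of evolution for the model's variables;
    # the computer advances those rules step by step, and its evolution is said
    # to "simulate" the evolution of the system. Toy model and parameters are
    # illustrative only.

    def step(x, y, dt, a=1.0, b=0.1, c=1.5, d=0.075):
        """Advance a simple predator-prey model one time step (Euler's method)."""
        dx = (a * x - b * x * y) * dt      # prey: growth minus predation
        dy = (-c * y + d * x * y) * dt     # predators: starvation plus feeding
        return x + dx, y + dy

    def simulate(x0, y0, dt=0.01, steps=10_000):
        x, y = x0, y0
        trajectory = [(x, y)]
        for _ in range(steps):
            x, y = step(x, y, dt)
            trajectory.append((x, y))
        return trajectory

    history = simulate(x0=10.0, y0=5.0)   # a discrete stand-in for the system's time-evolution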

The war ended before von Neumann's project was completed, but its eventual success persuaded Teller, von Neumann, and Enrico Fermi of the feasibility of a hydrogen bomb. It also convinced the military high brass of the practicability of electronic computation. Soon after, meteorology joined the ranks of weapons research as one of the first disciplines to make use of the computer. Von Neumann was convinced early on that hydrodynamics was very important to physics and that its development would require vast computational resources. He also became convinced that it would be strategic to enlist meteorologists, with the resources at their disposal, as allies. In 1946, he launched the Electronic Computer Project at the Institute for Advanced Study in Princeton and chose numerical meteorology as one of its largest projects. While working on the problem of simulating simplified weather systems, meteorologist and mathematician Edward Lorenz discovered a simple model that displayed characteristics now called "sensitive dependence on initial conditions" and "strange attractors," the hallmarks of a system well described by "chaos theory," a field he helped to create.

In the last forty-some years, simulations have proliferated in the sciences, and an enormous variety of techniques have been developed. In the physical sciences, two classes of simulations predominate: continuum methods and particle methods. Continuum methods begin by treating their object of study as a medium described by fields and distributions-a continuum. The goal is then to give differential equations that relate the rates of change of the values for these fields and distributions, and to use "discretization" techniques to transform these continuous differential equations into algebraic expressions that can be calculated step-by-step by a computer. Particle methods describe their objects of study either as a collection of nuclei and electrons or as a collection of atoms and molecules-the former only if quantum methods are employed.
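
Here is a minimal sketch of a continuum method in this spirit, assuming a one-dimensional diffusion equation as a stand-in for the field equations; the equation, the grid, and the parameter values are my own illustrative choices, not an example from the book.

    # Continuum recipe in miniature: the continuous field u(x, t), governed by
    # du/dt = k * d2u/dx2, is replaced by values on a grid, and the derivatives
    # by algebraic differences the computer can update step by step.
    import numpy as np

    k = 0.1        # diffusivity (illustrative)
    dx = 0.01      # grid spacing
    dt = 0.0004    # time step, kept small so that k*dt/dx**2 <= 0.5 (explicit-scheme stability)

    u = np.zeros(101)     # the field sampled at 101 grid points
    u[40:60] = 1.0        # an initial "hot" region

    for _ in range(1000):
        d2u = (u[2:] - 2 * u[1:-1] + u[:-2]) / dx**2   # discretized second spatial derivative
        u[1:-1] = u[1:-1] + k * dt * d2u               # step the interior points forward in time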

From a certain point of view, these are methods for overcoming merely practical limitations in our abilities to solve the equations provided by our best theories-theories like fluid mechanics, quantum mechanics, and classical molecular dynamics. Why should methods for overcoming practical limitations be of interest to philosophers? Philosophers of science are accustomed to centering the attention they devote to scientific theories on a cluster of canonical issues: What are theories? How are they confirmed? How should we interpret them? They tend to think that all of the philosophically interesting action takes place around that cluster-that what matters to philosophy is the nature of theories in principle, not what we are merely limited to doing with them in practice. The practical obstacles that need to be overcome when we work with theories can strike the philosopher as mundane.

This is as good a place as any to point out a methodological presupposition that prevails in most of this book. As I said above, I believe that philosophers of science have missed an opportunity to contribute to this explosive area of modern science precisely because they have had a prejudice for what is possible in principle over what we can achieve in practice. Accordingly, I focus on the current state of practice of computer simulation, rather than on what we think might, in principle, someday be possible. When I say in what follows, for example, that we cannot do P, I often mean that the prospect of doing P is presently intractable, not that proofs exist that P is impossible in principle. One might wonder whether it is sensible to draw philosophical conclusions from present practical difficulties-and the methods by which they are overcome-instead of focusing on what is possible or impossible in principle.

This is indeed a worry, but I do not believe it should prevent us from getting started on the difficult philosophical work of examining the present state of the art. It is true that some of the practical obstacles I discuss in what follows may someday be overcome. We may find more principled solutions to some of the problems for which we now apply less principled approaches. But I doubt this will mean that the lessons we draw from studying those less principled approaches will lose their take-home value when they are replaced. That is because I doubt we will ever reach a point where all scientific problems will have theoretically principled solutions. Scientific practice will surely evolve, but we will always be pushing the envelope of the set of problems we want to tackle with existing theory, and new practical difficulties will arise as old ones disappear. More important, philosophers should not allow the present state of flux in the computationally intensive sciences to prevent them from paying close enough attention to where most of the action has been in recent science: the unprincipled solutions of just these sorts of practical difficulties. This is a part of scientific practice that is responsible for more and more of the creation of knowledge in science and one that is ripe for philosophical attention.

And we should make no mistake about it-simulation is a process of knowledge creation, and one in which epistemological issues loom large. So the first thing that I want to do here is to convince you that simulation is in fact a deeply creative source of scientific knowledge, and to give a taste of its complex and motley character.

The second thing I want to do is to argue that the complex and motley nature of this epistemology suggests that the end results of simulations often do not bear a simple, straightforward relation to the theoretical backgrounds from which they arise. Accordingly, I want to urge philosophers of science to examine more carefully the process by which general theories are applied. It is a relatively neglected aspect of scientific practice, but it plays a role that is often as crucial, as complex, and as creative as the areas of science philosophers have traditionally studied: theorizing and experimenting.

Chapter Two

Sanctioning Models: Theories and Their Scope

Let us suppose that we are confronted with a physical system of which we would like to gain a better understanding: a severe storm, a gas jet, or the turbulent flow of water in a basin. The system in question is made up of certain underlying components that fall under the domain of some basic theoretical principles. We begin by making two assumptions: we know what the physical components of the system are and how they are arranged, and we have great confidence in those principles.

The assumptions we have made so far about our system often allow us to write down a set of partial differential equations. In principle, such differential equations give us a great deal of information about the system. The problem is that when these underlying components of the system-whether they be solid particles, parcels of fluid, or other constitutive features-interact as we suppose they do in our physical system, the differential equations take on an unfortunate property. In the types of systems with which the simulation modeler is concerned, it is mathematically impossible to find an analytic solution to these equations-the model given by the equations is said to be analytically intractable. In other words, it is impossible to write down a closed-form solution to these equations. A closed-form solution to a set of differential equations is a set of equations that are given in terms of known mathematical functions and whose partial derivatives are exactly the set of differential equations we are interested in.
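
A toy contrast may help fix the idea (the example is mine, not the book's): the equation of exponential decay has a closed-form solution in a known function, and differentiating that function returns exactly the original equation; no analogous expression in known functions is available for the equations governing, say, turbulent flow in a basin.

    \frac{dx}{dt} = -k\,x
    \quad\Longrightarrow\quad
    x(t) = x_0\,e^{-kt},
    \qquad\text{and indeed}\qquad
    \frac{d}{dt}\left(x_0\,e^{-kt}\right) = -k\left(x_0\,e^{-kt}\right).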

These problems are not new to the computer age, nor are attempts to overcome them. In the past, modelers have focused their attempts on analytic techniques for finding approximate solutions to the differential equations in question. In many instances, they have successfully used these techniques to generate closed-form functions that are approximately valid. That is, for problem situations such as the three-body problem, modelers have found functions that can be shown to have the same qualitative character as the unknown solution to the equations to be solved.
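
A familiar illustration of this older, analytic strategy (again my example, not one from the text): the pendulum equation has no elementary closed-form solution, but replacing the sine of the angle with the angle itself for small oscillations yields an approximately valid closed-form solution that shares the qualitative character of the true motion, a periodic oscillation.

    \ddot{\theta} + \frac{g}{L}\sin\theta = 0
    \;\approx\;
    \ddot{\theta} + \frac{g}{L}\,\theta = 0
    \quad\Longrightarrow\quad
    \theta(t) \approx \theta_0 \cos\!\left(\sqrt{g/L}\;t\right).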

(Continues...)



Excerpted from Science in the Age of Computer Simulation by ERIC B. WINSBERG Copyright © 2010 by The University of Chicago. Excerpted by permission.
All rights reserved. No part of this excerpt may be reproduced or reprinted without permission in writing from the publisher.
Excerpts are provided by Dial-A-Book Inc. solely for the personal use of visitors to this web site.

Table of Contents

Acknowledgments 

1   Introduction 
2   Sanctioning Models: Theories and Their Scope
3   Methodology for a Virtual World   
4   A Tale of Two Methods   
5   When Theories Shake Hands   
6   Models of Climate: Values and Uncertainties   
7   Reliability without Truth   
8   Conclusion   

References   
Index