Computer Ethics / Edition 3

Paperback (Print)
Overview

The third edition of Computer Ethics, by Deborah G. Johnson, retains the clear writing and general approach of the widely adopted and respected previous editions. Each chapter begins with a short scenario to introduce the topic and make the issue concrete, followed by a lucid analysis of the issue. Each chapter concludes with study questions and suggested further readings.

Author Deborah G. Johnson has updated material throughout this text. Two new chapters on the Internet have been added: one focuses on ethical behavior online, and the other addresses the social implications of the Internet.

Topics covered include:

  • What is computer ethics?
  • What are the philosophical foundations of computer ethics?
  • How does computer ethics relate to professional ethics?
  • Privacy.
  • Property rights.
  • Accountability.

All topics are presented in compelling and understandable language, so that rigorous, in-depth analysis is accessible to students who may be novices in philosophy or technology studies.


Product Details

  • ISBN-13: 9780130836991
  • Publisher: Prentice Hall
  • Publication date: 11/15/2000
  • Edition description: Subsequent
  • Edition number: 3
  • Pages: 256
  • Product dimensions: 5.94 (w) x 8.94 (h) x 0.53 (d)

Meet the Author

DEBORAH G. JOHNSON received the ACM SIGCAS 2000 Making A Difference Award for her "significant contribution in providing the philosophical foundation for Computer Ethics." She is a Professor of Philosophy in the School of Public Policy at the Georgia Institute of Technology, where she is also Director of the Program in Philosophy, Science, and Technology.


Read an Excerpt

PREFACE

With the publication of the third edition of Computer Ethics, I am reminded of the day in 1984 when I received the page-proofs of the first edition. I had just returned home from the hospital after having given birth to my daughter. I had composed the book on an Osborne computer using a word processor—I think it was called WordStar—that has been obsolete for more than 10 years now. Today my daughter, now a teenager, is more comfortable with computers than I am. She spends a good deal of her day sitting in front of a computer screen chatting with friends, doing schoolwork, and exploring the Web. I composed this edition of the book on a laptop computer using a version of MS Word that automatically corrected my misspellings and grammar. And, of course, in writing this edition of the book, I frequently went to the Web to look for resources and check references. While I continue to be cautious in making grand pronouncements about the significance of these technological changes for the quality and character of human lives, the changes that have taken place in these 16 years are awe-inspiring.

As I began writing this edition, it was strikingly clear that my primary task was to address the technological changes that have occurred since the second edition, especially the growth and penetration of the Internet into so many domains of life. What are we to make of Web sites, cookies, data mining tools, customized online services, and e-commerce? I have addressed many of these new issues while at the same time holding on to what I continue to believe are the core issues in computer ethics: professional ethics, privacy, property, accountability, and social implications and values. Indeed, you will see that in Chapter 1, I continue to struggle with the question at the heart of the field: what is computer ethics? Are the ethical issues surrounding computers unique? What is the connection between ethics and technology?

Contemplating the connection between technology and ethics raises an interesting and important question: Does the field of computer ethics simply follow the development of computer technology? Should computer ethicists simply react to technological developments? Wouldn't it be better if the sequence were reversed so that technological development followed ethics? Historically, the field of computer ethics has been reactive to the technology. As I explain in Chapter 1, new technological developments create new possibilities and the new possibilities need to be evaluated. As in the last edition, I build on the idea in Jim Moor's seminal piece "What Is Computer Ethics?" (1985) that new technologies create policy vacuums. The task of computer ethics, he argues, is to fill these policy vacuums. In a sense, the ethical issues are the policy vacuums, and policy vacuums are created when there is a new development or use of computer technology.

On the other hand, I want to suggest that it would be better if at least some of the movement were in the other direction—technology following ethics. Suppose, that is, we lived in a world where ethicists (or anyone, for that matter) identified potentially unethical situations or arrangements or ethically better possibilities, and engineers and computer scientists went to work designing technologies to change or remedy or improve the situation. I can think of a few examples when this has occurred, but only a few. Arguably, privacy-enhancing technologies and anonymous re-mailers are cases in point. Perhaps freeware and shareware are also examples. For the most part, however, the ethical issues have followed, rather than led, the technology. Here in very broad brushstrokes is my understanding of the evolution of the field of computer ethics, especially in the United States.

HISTORICAL OVERVIEW

In the decades immediately following World War II, ethical concerns were raised about computers, though these concerns were only vaguely expressed and articulated. One of the most salient concerns was that computers threatened our notion of what it means to be human because computers could do the very thing that was considered unique to humans: rational thinking. There was much discussion of artificial intelligence. There was some fear (and fascination with the idea) that computers might take over decision making from humans. I am thinking here of the movie 2001, but the theme also ran through science fiction literature, for example, in Isaac Asimov's short stories. Somewhat later, Jim Moor picked up on this theme and wrote an analytical article, "Are There Decisions That Computers Should Never Make?" (1979).

It could be argued that those very early concerns about computers were not exactly ethical in character. For example, no one explicitly argued that it was immoral to go forward with the development of computers because of the threat to our concept of human beings. And the science fiction literature did not suggest that it was immoral to turn over decision-making power to computers. Rather, the implicit argument seemed to be that there would be terrible consequences—possible catastrophes and degradation of human life—were decision making to be turned over to computers.

These concerns did not arise from any effect of the use of computers; they arose from the mere idea of computers. The very idea of a technology that could think, or do something very close to it, was threatening to our understanding of what it means to be human.

Ironically, it could be argued that this idea, the idea that computers do what humans do, has turned out to be rich in its influence on human thinking about thinking, rather than a threat. The model of human thought that computers provide has spawned the thriving new field of cognitive science and changed a number of related disciplines. (See for example, Bynum and Moor, 1999.)

In the late 1970s, the ethical issues began to be more clearly articulated in the works of Joseph Weizenbaum (1979) and Abbe Mowshowitz (1976), and it was in this period that the Privacy Protection Commission did a major study of privacy. The issues that took shape in this period had to do with the threat of big government and large-scale organizations, the related threat to privacy, and concern about the dominance of instrumental rationality. In hindsight, the concern about big government and privacy followed the technology in that in those early days, computers were being used extensively to create and maintain huge databases, databases of a variety of kinds, but especially databases of personal information. Computers were also being used for large numerical calculations. The large-scale calculations were primarily (though not exclusively) for government activities such as weapons development, space travel, and the U.S. census.

The next major technological shift was the development of small computers (microcomputers and personal computers). Attention turned, for a time at least, to the democratizing aspects of computers. Quietly, at the same time, remote access had come on the scene, first as remote access to large mainframes, later as a web of telecommunications connections between small computers.

Attention turned to software and the ethical issues surrounding it. The development and spread of microcomputers brought computer technology visibly and powerfully into the consumer marketplace. Software was recognized as something with enormous market value, and hence, all the ethical issues having to do with property arose. Should software be owned? If so, how? Would current intellectual property law provide adequate protection? Along with property rights issues came issues of liability and responsibility. In the marketplace, if consumers buy and use computers and software, they want to be able to rely on these tools and when something goes wrong, they want to know who to blame or they want to be compensated for their losses.

During this period, the market in computer games took off and it was also during this period that more attention began to focus on hackers. On the one hand, hackers were responding to the commercialization of computing. They did not like the idea of property rights in software. At the same time, those who were acquiring property rights and/or making a business of computing saw the threat posed by hackers, a threat to property rights and to system security.

In the 1990s, attention turned to the Internet. The coming together of computers, telecommunications, and media was the next major development in the technology. The development and expanded use of the Internet brought a seemingly endless set of ethical issues as the Internet came to be used in so many different ways in so many different domains of life. In effect, we are now in a process of transferring and re-creating much of the world into this new medium. At the same time, the Internet also raised all the concerns of the past. Privacy issues are exacerbated on the Internet; the democracy issue came back into play with new claims about the Internet's democratic character; property rights expanded to Web sites and global property rights became ever more important; and so on.

One other technological development that grew slowly during the 1980s and 1990s was the use of computer technology for a wide variety of visualization activities—not just computer graphics and gaming, but simulation activities including medical imaging and scientific models. This development expanded into the idea of virtual reality, an idea that has captivated many. Very quietly and slowly, ethical concerns have been raised about this thrust of computer technology. Unfortunately, I have been able to give only cursory attention to virtual reality issues.

In summary, during the 1960s and 1970s the dominant uses of the technology were for database creation and large-scale calculations. These uses of the technology brought correlated expressions of concern about centralization of power and big government, and threats to personal privacy. During this time, the very idea of computers seemed to threaten the idea of what it means to be human. During the 1980s, microcomputers were developed and made readily available. Remote access to large mainframe computers also became possible. Quietly, the system of telecommunication lines linking computers, which later became the Internet, was expanding and being made available beyond the "inner circle" of developers. Also, the computer/video game industry began to take off. With these developments came correlative concerns about property rights, liability issues, and the threat posed by hackers. In the 1990s, the coming together of telecommunications and computers reached a pinnacle of development, and the Internet and the World Wide Web (Web) became widely available. These technological developments are still being assimilated, but they gave rise to a seemingly endless array of ethical issues as well as exacerbating those that were already there.

This is a story of computer ethical issues following technological developments. The question remains whether this pattern is as it should be. As I suggested before, reversing the order would seem to have some advantages, though scholars in the field of computer ethics do not seem to recognize the possibility of leading rather than following the technology. A central focus on the topic of design of computer technology would go a long way toward reversing this pattern. If the designers of technology were to think about the ethical and social implications of their designs before they became a reality, wouldn't the world be a better place!

CHANGES IN THE THIRD EDITION

Readers who are familiar with earlier editions of Computer Ethics will note that in this edition I have added two chapters specifically focused on the Internet, Chapter 4, "Ethics Online," and Chapter 8, "Social Implications and Social Values." The addition of this new material led to other changes in the organization of the book. First, instead of having a separate chapter on crime, abuse, and hacker ethics, I have situated the discussion of hackers and hacker ethics in the first chapter on the Internet, Chapter 4. This placement recognizes that hacking is a phenomenon made possible by the combination of computers and telecommunications lines that we now call the Internet. In 1994, when the second edition was published, the Internet had already been created, but it was far from clear that it would become what it has. Second, instead of having one chapter on the social implications of computer and information technology and another on the social implications of the Internet, I have combined material on the social implications of computer technology from the second edition with new material on the Internet. While I discuss both, the primary focus of Chapter 8 is on the social implications of the Internet and especially its social implications for democracy. I found this approach useful for focusing discussion of the relationships between technology and social change and between values and technology.

As with previous editions, there are many possible paths a reader might take through the book. The topics from chapter to chapter are interconnected, but each chapter has been written to stand essentially alone. When used as a textbook, the path students take through the book should be determined by the type of students being taught, the length of the course, and other books and materials being used in the course. For example, when teaching a class of computer science majors, it is important that the chapter on professional ethics be read early on. This sets students up to think of the issues as part of their professional responsibility. When teaching nonmajors, this chapter can comfortably be read at the end of a course, and can be presented as a way of thinking about how some of the issues discussed in the book might be addressed—control who becomes a computer professional and give computer professionals more responsibility for the effects of their work.

As in the previous editions, I have started each chapter with a set of short scenarios. The scenarios are intended to entice the reader into the topic, to implicitly make the case for the importance of the topic, and to make the topic concrete for those who are impatient with theory. The cases also provide the content for teaching skills in ethical analysis. As before, I have provided study questions and suggested further reading at the end of each chapter.

OVERVIEW

Chapter 1: Introduction: Why Computer Ethics?
In Chapter 1, I make the case for the importance of computer ethics and explore why computer and information technology raises ethical questions when many other technologies do not. Building on Moor's idea that the task of computer ethics is to fill policy vacuums, I describe generally how computer and information technology gives rise to ethical issues. I push further, addressing how these issues can be resolved, and explore the traditionalist account, which holds that we can extend ordinary moral principles to situations created by computer technology. This discussion prepares the way for asking in what ways computer ethical issues are unique and in what ways they are not. As in the last edition, I argue that it is useful to think of the ethical issues surrounding computer and information technology as new species of generic moral issues. I support this idea by arguing that while ethics is always about human action, technology instruments human action, making it possible for individuals and institutions to behave in ways they couldn't behave without technology. Traditional ethics and ethical theories have largely ignored the instrumentation of human action. Computer ethics brings this unexplored area of ethics into focus. I conclude this chapter with a brief discussion of the virtues and dangers of using analogies in analyzing computer ethical issues.

Chapter 2: Philosophical Ethics
This chapter is largely as it was in the second edition, though I have added brief descriptions of virtue ethics and John Rawls' theory of justice. As before, the aim of this chapter is to show that ethics is not just a subjective and relativistic enterprise. The aim is to show that ethics and ethical analysis involve giving reasons and making arguments for one's claims and subjecting those reasons and arguments to critical evaluation. By reviewing traditional ethical theories, the chapter provides readers with a useful vocabulary and conceptual tools for thinking through ethical issues. The chapter is not intended to provide the theoretical framework from which answers to all ethical questions can be deduced. Rather, the aim of this chapter is to suggest that ethical analysis is a dialectic process.

Chapter 3: Professional Ethics
The organization of this chapter and the ideas explored are fundamentally the same as in the last edition of Computer Ethics, though I have tried to clarify the ideas further, and I have updated the chapter by addressing the issue of licensing of software engineers. I have also recognized recent changes to the ACM code of ethics. The chapter begins with a discussion of how becoming a member of a profession can lead to somewhat special moral rights and responsibilities. That analysis sets the scene for defining profession and professional, and for asking whether computing is a profession. This is followed by a brief discussion of software engineering licensing. The focus of the chapter then turns to the responsibilities of computer professionals: to employers, to clients, to the public, and to co-professionals, and to how these responsibilities come into conflict. The chapter ends with a brief discussion of professional codes of ethics.

Chapter 4: Ethics and the Internet I: Ethics Online
I begin this chapter by identifying what is morally significant and distinct about the Internet. Focusing on the Internet as a medium of communication, what seems morally significant is the many-to-many global scope of the Internet, the availability of a certain kind of anonymity, and the reproducibility of the medium. After drawing out the implications of these features, emphasizing the difficulties of accountability and trust, I move on to discuss hacking and hacker ethics. Here I have included some of the material from the previous edition. I conclude the chapter with a discussion of the problems the Internet seems to pose for controlling socially undesirable behavior and for encouraging civil behavior.

Chapter 5: Privacy
Chapter 5 is a combination of old and new material. As in the previous edition, I begin by asking how computer and information technology has changed the collection and distribution of personal information. I describe the traditional way in which the privacy issue has been framed—as necessarily involving a trade-off between individual interests in controlling information and the efficiency and improved decision making of those who can make use of the information. I argue for reframing the issue in a way that recognizes personal privacy not just as an individual good but as a social good, and I try to make clear the importance of privacy for democracy. I conclude the chapter by discussing a variety of possible approaches to improving the protection of personal privacy.

Chapter 6: Property
In the years since I wrote the first edition of Computer Ethics, the property rights issues have gotten more and more complicated. While there is dissatisfaction with current law, in fact, the law has not changed fundamentally. I have come to the conclusion that the most useful approach for an introductory text of this kind is to stay with the fundamentals. Thus, this chapter is very similar to the property rights chapter in the second edition. I begin by describing the problem that ownership of software poses. I describe copyright, trade secrecy, and patent law and the inadequacies each has for protecting computer software. Digging deeper into the problem, I explore the philosophical basis for property rights, looking first at the natural rights arguments and then at the utilitarian arguments for and against ownership. I conclude with an argument similar to the one I made in the second edition: making an illegal copy of proprietary software is immoral because it is illegal, though there are other kinds of acts that would be immoral even if they were legal. I close with a brief discussion of how the Internet is likely to exacerbate property rights issues.

Chapter 7: Accountability
Chapter 7 begins with scenarios that pose a wide range of accountability issues: responsibility for rape in a virtual reality game, accountability when software recommends a decision, liability of Internet Service Providers, and responsibility for the Y2K problem. A discussion of the different meanings and uses of terms such as responsibility, accountability, liability, and blame lays the groundwork for the chapter. The focus then turns to the legal environment for the buying and selling of computer and information technology, where the distinction between selling a product and providing a service is pivotal. The remainder of the chapter is devoted to issues that are unique to computer and information technology, especially the diffusion of accountability, the Y2K problem, and Internet issues.

Chapter 8: Ethics and the Internet II: Social Implications and Social Values
Chapter 8 is the second chapter devoted to the Internet. Drawing on material from the second edition, I begin this chapter with a general discussion of technology and social change and identify the pitfalls in asking questions such as, Is computer and information technology causing a social revolution? Is it changing things or reinforcing the status quo? Is technology good or bad? Attention is then focused on values embedded in computer and information technology. I emphasize how value-laden technology is. The chapter turns, then, to the Internet and democracy. I examine the arguments that are made to show that the Internet is a democratic technology and I critique these arguments.

Computer and information technology and especially the Internet have been implicated in the widening gap between haves and have-nots within countries and among countries of the world. After examining this issue which has come to be known as "the digital divide," I briefly discuss the gender gap in computing, and then I turn briefly to the value of freedom of expression. I conclude this chapter by pointing to three issues that will be particularly important to watch in the future: jurisdiction, systems of trust, and insularity.

Practical Ethics
Finally, it may be helpful to explain what I have and have not tried to do with regard to ethical theory. I have not aimed to provide "the" ethical theory from which all ethical beliefs should be derived. Nor have I tried to provide the only possible or adequate ethical analysis of any particular issue. There are both pedagogical and theoretical reasons for taking this stance.

Pedagogically, I believe it is essential for students and other readers to struggle with the cases, the issues, and the relevant moral concepts and theories. This is crucial to developing skill at ethical analysis and critical thinking, and to developing moral personhood. Hence, it would be somewhat counterproductive to present (what I claim to be) "the" final answers to all the ethical questions raised in this book. Instead I have left a good deal of room for further struggle with the issues. I have tried to present ethics as an ongoing, dialectic enterprise. My analyses are intended to move the discussion forward but not to end it.

Moreover, this book is an undertaking in practical ethics, and practical ethics is the middle ground where abstract ethical theories and concepts meet real-world problems and decisions. It takes an enormous amount of work to understand what theories mean for real-world situations, issues, and decisions, and in some sense, we don't understand theories until we understand what they imply about real-world situations. Practical ethics is best understood as the domain in which there is negotiation between theory and real-world situations. We draw on moral concepts and theories but we must interpret them and draw out their implications for the issues at hand. In practical ethics, we work both ways, from theory to context and from context to theory. Often a theory or several theories provide illumination on a practical matter; other times, struggle with the practical problem leads to new insight into a theory.

TERMINOLOGY

As in earlier editions, I use ethics and morality interchangeably. Many philosophers draw a distinction between the two, but I find such distinctions too far from conventional usage to be of help.

Since the publication of the previous edition, linguistic conventions around computer technology have changed. Computers have come to be understood as more than simply computational—computing—machines. Moreover, there are a wide variety of computer-related tools for which it may or may not be appropriate to use the term computer. It was difficult to decide what terminology to adopt to refer to the technology on which this book focuses—information technology (IT), communication technology, information and communication technology (ICT), and so on. Indeed, it was tempting to change the name of the book to something like Information Ethics or IT Ethics, but, in the end, I have concluded that it is best to keep the name of the book the same, if for no other reason than that it is a known product. Wherever possible I use the (cumbersome) phrase computer and information technology rather than just computer technology. This more closely reflects current linguistic practice and understanding of the technology. In fact, the focus of the book may more accurately be described as a focus on a family of technologies that deal with a very wide variety of types of information (signals, data, images, words, etc.).

ACKNOWLEDGMENTS

A number of people helped me with this edition by reading chapters and providing comments and suggestions. I particularly want to thank Keith Miller, Fran Grodzinsky, Roberta Berry, Don Gotterbarn, Andy Ward, and Amy Bruckman for their feedback on various chapters. Marc Pang Quek was a wonderful graduate research assistant and wrote several scenarios based on real cases.

I owe much to many others who have helped me in a wide variety of ways. Scholarship is a collective enterprise. I have learned from reading the ideas of others, from listening and talking to students and colleagues. I am always encouraged by those who are willing to tell me what they like and don't like about my writing.

I will always be grateful to my colleagues in the Department of Science and Technology Studies at Rensselaer Polytechnic Institute. My years there shaped my thinking and my career in profound and permanent ways. In 1998, I moved to the School of Public Policy at Georgia Institute of Technology and I have been delighted to find another robust and lively interdisciplinary environment. My new colleagues and students in public policy and in other units of the Ivan Allen College at Georgia Tech have already enriched my thinking.

In writing this edition, I drew on several of my previously published articles: "Ethics Online," published in Communications of the ACM; a chapter entitled "Democratic Values" in Duncan Langford's book Ethics and the Internet; and "Is the GII a Democratic Technology?," presented at several conferences and then published in Computers and Society. I developed Chapter 1 from an unpublished paper that I wrote for the Tangled Web conference.

As I was completing a draft of this edition, Keith Miller, Laurie King, Tracy Camp, and I received a grant from the National Science Foundation's Program on Course, Curriculum and Laboratory Improvement, to develop materials and hold workshops focused on using the Web to teach computer ethics. The first workshop took place in June of 2000. During the workshop I received extremely valuable feedback on a draft of the book. Materials developed as part of the grant are available at: www.uis.edu/~miller/dolce/ and may be useful to teachers who use this book in their courses.

DEBORAH G. JOHNSON

Read More Show Less

Table of Contents



1. Introduction: Why Computer Ethics?


2. Philosophical Ethics.


3. Professional Ethics.


4. Ethics and the Internet I: Ethics Online.


5. Privacy.


6. Property Rights in Computer Software.


7. Accountability and Computer and Information Technology.


8. Ethics and the Internet II: Social Implications and Social Values.


References.


Index.


Preface


With the publication of the third edition of Computer Ethics, I am reminded of the day in 1984 when I received the page-proofs of the first edition. I had just returned home from the hospital after having given birth to my daughter. I had composed the book on an Osborne computer using a word processor—I think it was called WordStar—that has been obsolete for more than 10 years now. Today my daughter, now a teenager, is more comfortable with computers than I am. She spends a good deal of her day sitting in front of a computer screen chatting with friends, doing schoolwork, and exploring the Web. I composed this edition of the book on a laptop computer using a version of MS Word that automatically corrected my misspellings and grammar. And, of course, in writing this edition of the book, I frequently went to the Web to look for resources and check references. While I continue to be cautious in making grand pronouncements about the significance of these technological changes for the quality and character of human lives, the changes that have taken place in these 16 years are awe-inspiring.

As I began writing this edition, it was strikingly clear that my primary task was to address the technological changes that have occurred since the second edition, especially the growth and penetration of the Internet into so many domains of life. What are we to make of Web sites, cookies, data mining tools, customized online services, and e-commerce? I have addressed many of these new issues while at the same time holding on to what I continue to believe are the core issues in computer ethics: professional ethics, privacy, property, accountability, and social implications and values. Indeed, you will see that in Chapter 1, I continue to struggle with the question at the heart of the field, what is computer ethics? Are the ethical issues surrounding computers unique? What is the connection between ethics and technology?

Contemplating the connection between technology and ethics raises an interesting and important question: Does the field of computer ethics simply follow the development of computer technology? Should computer ethicists simply react to technological developments? Wouldn't it be better if the sequence were reversed so that technological development followed ethics? Historically, the field of computer ethics has been reactive to the technology. As I explain in Chapter 1, new technological developments create new possibilities and the new possibilities need to be evaluated. As in the last edition, I build on the idea in Jim Moor's seminal piece "What Is Computer Ethics?" (1985) that new technologies create policy vacuums. The task of computer ethics, he argues, is to fill these policy vacuums. In a sense, the ethical issues are the policy vacuums, and policy vacuums are created when there is a new development or use of computer technology.

On the other hand, I want to suggest that it would be better if at least some of the movement were in the other direction—technology following ethics. Suppose, that is, we lived in a world where ethicists (or anyone, for that matter) identified potentially unethical situations or arrangements or ethically better possibilities, and engineers and computer scientists went to work designing technologies to change or remedy or improve the situation. I can think of a few examples when this has occurred, but only a few. Arguably, privacy-enhancing technologies and anonymous re-mailers are cases in point. Perhaps freeware and shareware are also examples. For the most part, however, the ethical issues have followed, rather than led, the technology. Here in very broad brushstrokes is my understanding of the evolution of the field of computer ethics, especially in the United States.

HISTORICAL OVERVIEW

In the decades immediately following World War II, ethical concerns were raised about computers, though these concerns were only vaguely expressed and articulated. One of the most salient concerns was that computers threatened our notion of what it means to be human because computers could do the very thing that was considered unique to humans, rational thinking. There was much discussion of artificial intelligence. There was some fear (and fascination with the idea) that computers might take over decision making from humans. I am thinking here of the movie 2001 but the theme also ran through science fiction literature, for example, in Isaac Asimov's short stories. Somewhat later, Jim Moor picked up on this theme and wrote an analytical article, "Are There Decisions That Computers Should Never Make?" (1979).

It could be argued that those very early concerns about computers were not exactly ethical in character. For example, no one explicitly argued that it was immoral to go forward with the development of computers because of the threat to our concept of human beings. And the science fiction literature did not suggest that it was immoral to turn over decision-making power to computers. Rather, the implicit argument seemed to be that there would be terrible consequences—possible catastrophes and degradation of human life—were decision making to be turned over to computers.

These concerns did not arise from any actual effect of using computers; they arose from the mere idea of computers. The very idea of a technology that could think, or do something very close to it, was threatening to our understanding of what it means to be human.

Ironically, it could be argued that this idea, the idea that computers do what humans do, has turned out to be rich in its influence on human thinking about thinking, rather than a threat. The model of human thought that computers provide has spawned the thriving new field of cognitive science and changed a number of related disciplines. (See for example, Bynum and Moor, 1999.)

In the late 1970s, the ethical issues began to be more clearly articulated in the works of Joseph Weizenbaum (1976) and Abbe Mowshowitz (1976), and it was in this period that the Privacy Protection Study Commission did a major study of privacy. The issues that took shape in this period had to do with the threat of big government and large-scale organizations, the related threat to privacy, and concern about the dominance of instrumental rationality. In hindsight, the concern about big government and privacy followed the technology in that in those early days, computers were being used extensively to create and maintain huge databases, databases of a variety of kinds, but especially databases of personal information. Computers were also being used for large numerical calculations. The large-scale calculations were primarily (though not exclusively) for government activities such as weapons development, space travel, and the U.S. census.

The next major technological shift was the development of small computers (microcomputers and personal computers). Attention turned, for a time at least, to the democratizing aspects of computers. Quietly, at the same time, remote access had come on the scene, first as remote access to large mainframes, later as a web of telecommunications connections between small computers.

Attention turned to software and the ethical issues surrounding it. The development and spread of microcomputers brought computer technology visibly and powerfully into the consumer marketplace. Software was recognized as something with enormous market value, and hence, all the ethical issues having to do with property arose. Should software be owned? If so, how? Would current intellectual property law provide adequate protection? Along with property rights issues came issues of liability and responsibility. In the marketplace, if consumers buy and use computers and software, they want to be able to rely on these tools and when something goes wrong, they want to know who to blame or they want to be compensated for their losses.

During this period, the market in computer games took off and it was also during this period that more attention began to focus on hackers. On the one hand, hackers were responding to the commercialization of computing. They did not like the idea of property rights in software. At the same time, those who were acquiring property rights and/or making a business of computing saw the threat posed by hackers, a threat to property rights and to system security.

In the 1990s, attention turned to the Internet. The coming together of computers, telecommunications, and media was the next major development in the technology. The development and expanded use of the Internet brought a seemingly endless set of ethical issues as the Internet came to be used in so many different ways in so many different domains of life. In effect, we are now in a process of transferring and re-creating much of the world into this new medium. At the same time, the Internet also raised all the concerns of the past. Privacy issues are exacerbated on the Internet; the democracy issue came back into play with new claims about the Internet's democratic character; property rights expanded to Web sites and global property rights became ever more important; and so on.

One other technological development that grew slowly during the 1980s and 1990s was the use of computer technology for a wide variety of visualization activities—not just computer graphics and gaming, but simulation activities including medical imaging and scientific models. This development expanded into the idea of virtual reality, an idea that has captivated many. Very quietly and slowly, ethical concerns have been raised about this thrust of computer technology. Unfortunately, I have been able to give only cursory attention to virtual reality issues.

In summary, during the 1960s and 1970s the dominant uses of the technology were for database creation and large-scale calculations. These uses of the technology brought correlated expressions of concern about centralization of power and big government, and threats to personal privacy. During this time, the very idea of computers seemed to threaten the idea of what it means to be human. During the 1980s, microcomputers were developed and made readily available. Remote access to large mainframe computers also became possible. Quietly, the system of telecommunication lines linking computers, which later became the Internet, was expanding and being made available beyond the "inner circle" of developers. Also, the computer/video game industry began to take off. With these developments came correlative concerns about property rights, liability issues, and the threat posed by hackers. In the 1990s, the coming together of telecommunications and computers reached a pinnacle of development and the Internet and the World Wide Web (Web) became widely available. These technological developments are still being assimilated, but they gave rise to a seemingly endless array of ethical issues as well as exacerbating those that were already there.

This is a story of computer ethical issues following technological developments. The question remains whether this pattern is as it should be. As I suggested before, reversing the order would seem to have some advantages, though scholars in the field of computer ethics do not seem to recognize the possibility of leading rather than following the technology. A central focus on the topic of design of computer technology would go a long way toward reversing this pattern. If the designers of technology were to think about the ethical and social implications of their designs before they became a reality, wouldn't the world be a better place!

CHANGES IN THE THIRD EDITION

Readers who are familiar with earlier editions of Computer Ethics will note that in this edition I have added two chapters specifically focused on the Internet, Chapter 4 "Ethics Online" and Chapter 8 "Social Implications and Social Values." The addition of this new material led to other changes in the organization of the book. First, instead of having a separate chapter on crime, abuse, and hacker ethics, I have situated the discussion of hackers and hacker ethics in the first chapter on the Internet, Chapter 4. This placement recognizes that hacking is a phenomenon made possible by the combination of computers and telecommunications lines that we now call the Internet. In 1994 when the second edition was published, the Internet had already been created, but it was far from clear that it would become what it has. Second, instead of having one chapter on the social implications of computer and information technology and another on the social implications of the Internet, I have combined material on the social implications of computer technology from the second edition with new material on the Internet. While I discuss both, the primary focus of Chapter 8 is on the social implications of the Internet and especially its social implications for democracy. I found this approach useful for focusing discussion of the relationships between technology and social change and between values and technology.

As with previous editions, there are many possible paths a reader might take through the book. The topics from chapter to chapter are interconnected, but each chapter has been written to stand essentially alone. When used as a textbook, the path students take through the book should be determined by the type of students being taught, the length of the course, and other books and materials being used in the course. For example, when teaching a class of computer science majors, it is important that the chapter on professional ethics be read early on. This sets students up to think of the issues as part of their professional responsibility. When teaching nonmajors, this chapter can comfortably be read at the end of a course, and can be presented as a way of thinking about how some of the issues discussed in the book might be addressed—control who becomes a computer professional and give computer professionals more responsibility for the effects of their work.

As in the previous editions, I have started each chapter with a set of short scenarios. The scenarios are intended to entice the reader into the topic, to implicitly make the case for the importance of the topic, and to make the topic concrete for those who are impatient with theory. The cases also provide the content for teaching skills in ethical analysis. As before, I have provided study questions and suggested further reading at the end of each chapter.

OVERVIEW

Chapter 1: Introduction: Why Computer Ethics?
In Chapter 1, I make the case for the importance of computer ethics and I explore why computer and information technology raises ethical questions when many other technologies do not. Building on Moor's idea that the task of computer ethics is to fill policy vacuums, I describe generally how computer and information technology gives rise to ethical issues. I push further, addressing how these issues can be resolved, and explore the traditionalist account, which holds that we can extend ordinary moral principles to situations created by computer technology. This discussion prepares the way for asking in what ways computer ethical issues are unique and in what ways not. As in the last edition, I argue that it is useful to think of the ethical issues surrounding computer and information technology as new species of generic moral issues. I support this idea by arguing that while ethics is always about human action, technology instruments human action and technology makes it possible for individuals and institutions to behave in ways they couldn't behave without technology. Traditional ethics and ethical theories have largely ignored the instrumentation of human action. Computer ethics brings this unexplored area of ethics into focus. I conclude this chapter with a brief discussion of the virtues and dangers of using analogies in analyzing computer ethical issues.

Chapter 2: Philosophical Ethics
This chapter is largely as it was in the second edition though I have added brief descriptions of virtue ethics and John Rawls' theory of justice. As before, the aim of this chapter is to show that ethics is not just a subjective and relativistic enterprise. The aim is to show that ethics and ethical analysis involve giving reasons and making arguments for one's claims and subjecting those reasons and arguments to critical evaluation. By reviewing traditional ethical theories, the chapter provides readers with a useful vocabulary and conceptual tools for thinking through ethical issues. The chapter is not intended to provide the theoretical framework from which answers to all ethical questions can be deduced. Rather, the aim of this chapter is to suggest that ethical analysis is a dialectic process.

Chapter 3: Professional Ethics
The organization of this chapter and the ideas explored are fundamentally the same as in the last edition of Computer Ethics though I have tried to clarify the ideas further, and I have updated the chapter by addressing the issue of licensing of software engineers. I have also recognized recent changes to the ACM code of ethics. The chapter begins with a discussion of how becoming a member of a profession can lead to somewhat special moral rights and responsibilities. That analysis sets the scene for defining profession and professional, and for asking whether computing is a profession. This is followed by a brief discussion of software engineering licensing. The focus of the chapter then turns to the responsibilities of computer professionals, to employers, to clients, to the public, and to co-professionals, and how they come into conflict. The chapter ends with a brief discussion of professional codes of ethics.

Chapter 4: Ethics and the Internet I: Ethics Online
I begin this chapter by identifying what is morally significant and distinct about the Internet. Focusing on the Internet as a medium of communication, what seems morally significant is the many-to-many global scope of the Internet, the availability of a certain kind of anonymity, and the reproducibility of the medium. After drawing out the implications of these features, emphasizing the difficulties of accountability and trust, I move on to discuss hacking and hacker ethics. Here I have included some of the material from the previous edition. I conclude the chapter with a discussion of the problems the Internet seems to pose for controlling socially undesirable behavior and for encouraging civil behavior.

Chapter 5: Privacy
Chapter 5 is a combination of old and new material. As in the previous edition, I begin by asking how computer and information technology has changed the collection and distribution of personal information. I describe the traditional way in which the privacy issue has been framed—as necessarily involving a trade-off between individual interests in controlling information and the efficiency and improved decision making of those who can make use of the information. I argue for reframing the issue in a way that recognizes personal privacy not just as an individual good but as a social good, and I try to make clear the importance of privacy for democracy. I conclude the chapter by discussing a variety of possible approaches to improving the protection of personal privacy.

Chapter 6: Property
In the years since I wrote the first edition of Computer Ethics, the property rights issues have gotten more and more complicated. While there is dissatisfaction with current law, in fact, the law has not changed fundamentally. I have come to the conclusion that the most useful approach for an introductory text of this kind is to stay with the fundamentals. Thus, this chapter is very similar to the property rights chapter in the second edition. I begin by describing the problem that ownership of software poses. I describe copyright, trade secrecy, and patent law and the inadequacies each has for protecting computer software. Digging deeper into the problem, I explore the philosophical basis for property rights, looking first at the natural rights arguments and then at the utilitarian arguments for and against ownership. I conclude with an argument similar to the one I made in the second edition: making an illegal copy of proprietary software is immoral because it is illegal, though other kinds of wrongdoing would be immoral even if they were legal. I conclude with a brief discussion of how the Internet is likely to exacerbate property rights issues.

Chapter 7: Accountability
Chapter 7 begins with scenarios that pose a wide range of accountability issues: responsibility for rape in a virtual reality game, accountability when software recommends a decision, liability of Internet Service Providers, and responsibility for the Y2K problem. A discussion of the different meanings and uses of terms such as responsibility, accountability, liability, and blame lays the groundwork for the chapter. The focus then turns to the legal environment for the buying and selling of computer and information technology where the distinction between selling a product and providing a service is pivotal. The remainder of the chapter is devoted to issues that are unique to computer and information technology, especially the diffusion of accountability, the Y2K problem, and Internet issues.

Chapter 8: Ethics and the Internet II: Social Implications and Social Values
Chapter 8 is the second chapter devoted to the Internet. Drawing on material from the second edition, I begin this chapter with a general discussion of technology and social change and identify the pitfalls in asking questions such as, Is computer and information technology causing a social revolution? Is it changing things or reinforcing the status quo? Is technology good or bad? Attention is then focused on values embedded in computer and information technology. I emphasize how value-laden technology is. The chapter turns, then, to the Internet and democracy. I examine the arguments that are made to show that the Internet is a democratic technology and I critique these arguments.

Computer and information technology and especially the Internet have been implicated in the widening gap between haves and have-nots within countries and among countries of the world. After examining this issue which has come to be known as "the digital divide," I briefly discuss the gender gap in computing, and then I turn briefly to the value of freedom of expression. I conclude this chapter by pointing to three issues that will be particularly important to watch in the future: jurisdiction, systems of trust, and insularity.

Practical Ethics
Finally it may be helpful to explain what I have and have not tried to do with regard to ethical theory. I have not aimed to provide "the" ethical theory from which all ethical beliefs should be derived. Nor have I tried to provide the only possible or adequate ethical analysis of any particular issue. There are both pedagogical and theoretical reasons for taking this stance.

Pedagogically, I believe it is essential for students and other readers to struggle with the cases, the issues, and the relevant moral concepts and theories. This is crucial to developing skill at ethical analysis and critical thinking, and to developing moral personhood. Hence, it would be somewhat counterproductive to present (what I claim to be) "the" final answers to all the ethical questions raised in this book. Instead I have left a good deal of room for further struggle with the issues. I have tried to present ethics as an ongoing, dialectic enterprise. My analyses are intended to move the discussion forward but not to end it.

Moreover, this book is an undertaking in practical ethics, and practical ethics is the middle ground where abstract ethical theories and concepts meet real-world problems and decisions. It takes an enormous amount of work to understand what theories mean for real-world situations, issues, and decisions, and in some sense, we don't understand theories until we understand what they imply about real-world situations. Practical ethics is best understood as the domain in which there is negotiation between theory and real-world situations. We draw on moral concepts and theories but we must interpret them and draw out their implications for the issues at hand. In practical ethics, we work both ways, from theory to context and from context to theory. Often a theory or several theories provide illumination on a practical matter; other times, struggle with the practical problem leads to new insight into a theory.

TERMINOLOGY

As in earlier editions, I use ethics and morality interchangeably. Many philosophers draw a distinction between the two, but I find such distinctions too far from conventional usage to be of help.

Since the publication of the previous edition, linguistic conventions around computer technology have changed. Computers have come to be understood as more than simply computational—computing—machines. Moreover, there are a wide variety of computer-related tools for which it may or may not be appropriate to use the term computer. It was difficult to decide what terminology to adopt to refer to the technology on which this book focuses—information technology (IT), communication technology, information and communication technology (ICT), and so on. Indeed, it was tempting to change the name of the book to something like Information Ethics or IT Ethics, but, in the end, I have concluded that it is best to keep the name of the book the same, if for no other reason than that it is a known product. Wherever possible I use the (cumbersome) phrase computer and information technology rather than just computer technology. This more closely reflects current linguistic practice and understanding of the technology. In fact, the focus of the book may more accurately be described as a focus on a family of technologies that deal with a very wide variety of types of information (signals, data, images, words, etc.).

ACKNOWLEDGMENTS

A number of people helped me with this edition by reading chapters and providing comments and suggestions. I particularly want to thank Keith Miller, Fran Grodzinsky, Roberta Berry, Don Gotterbarn, Andy Ward, and Amy Bruckman for their feedback on various chapters. Marc Pang Quek was a wonderful graduate research assistant and wrote several scenarios based on real cases.

I owe much to many others who have helped me in a wide variety of ways. Scholarship is a collective enterprise. I have learned from reading the ideas of others, from listening and talking to students and colleagues. I am always encouraged by those who are willing to tell me what they like and don't like about my writing.

I will always be grateful to my colleagues in the Department of Science and Technology Studies at Rensselaer Polytechnic Institute. My years there shaped my thinking and my career in profound and permanent ways. In 1998, I moved to the School of Public Policy at Georgia Institute of Technology and I have been delighted to find another robust and lively interdisciplinary environment. My new colleagues and students in public policy and in other units of the Ivan Allen College at Georgia Tech have already enriched my thinking.

In writing this edition, I drew on several of my previously published articles: "Ethics Online," published in Communications of the ACM; a chapter entitled "Democratic Values" in Duncan Langford's book Ethics and the Internet; and "Is the GII a Democratic Technology?," presented at several conferences and then published in Computers and Society. I developed Chapter 1 from an unpublished paper that I wrote for the Tangled Web conference.

As I was completing a draft of this edition, Keith Miller, Laurie King, Tracy Camp, and I received a grant from the National Science Foundation's Program on Course, Curriculum and Laboratory Improvement, to develop materials and hold workshops focused on using the Web to teach computer ethics. The first workshop took place in June of 2000. During the workshop I received extremely valuable feedback on a draft of the book. Materials developed as part of the grant are available at: www.uis.edu/~miller/dolce/ and may be useful to teachers who use this book in their courses.

DEBORAH G. JOHNSON

