Most organizations have a firewall, antivirus software, and intrusion detection systems, all of which are intended to keep attackers out. So why is computer security a bigger problem today than ever before? The answer is simple: bad software lies at the heart of all computer security problems. Traditional solutions simply treat the symptoms, not the problem, and usually do so in a reactive way. This book teaches you how to take a proactive approach to computer security.
Building Secure Software cuts to the heart of computer security to help you get security right the first time. If you are serious about computer security, you need to read this book, which includes essential lessons for both security professionals who have come to realize that software is the problem, and software developers who intend to make their code behave. Written for anyone involved in software development and use—from managers to coders—this book is your first step toward building more secure software. Building Secure Software provides expert perspectives and techniques to help you ensure the security of essential software. If you consider threats and vulnerabilities early in the development cycle you can build security into your system. With this book you will learn how to determine an acceptable level of risk, develop security tests, and plug security holes before software is even shipped.
Inside you'll find the ten guiding principles for software security, as well as detailed coverage of:
Only by building secure software can you defend yourself against security breaches and gain the confidence that comes with knowing you won't have to play the "penetrate and patch" game anymore. Get it right the first time. Let these expert authors show you how to properly design your system; save time, money, and credibility; and preserve your customers' trust.
". . . any program, no matter how innocuous it seems,
can harbor security holes. ...We thus have a firm belief
that everything is guilty until proven innocent."
—William Cheswick and Steve Bellovin
Firewalls and Internet Security
Computer security is an important topic. As e-commerce blossoms, and the Internet works its way into every nook and cranny of our lives, security and privacy come to play an essential role. Computer security is moving beyond the realm of the technical elite, and is beginning to have a real impact on our everyday lives.
It is no big surprise, then, that security seems to be popping up everywhere, from headline news to TV talk shows. Because the general public doesn't know very much about security, a majority of the words devoted to computer security cover basic technology issues such as what firewalls are, what cryptography is, or which antivirus product is best. Much of the rest of computer security coverage centers around the "hot topic of the day," usually involving an out-of-control virus or a malicious attack. Historically, the popular press pays much attention to viruses and denial-of-service attacks: Many people remember hearing about the Anna Kournikova worm, the "Love Bug," or the Melissa virus ad nauseam. These topics are important, to be sure. Nonetheless, the media generally manages not to get to the heart of the matter when reporting these subjects. Behind every computer security problem and malicious attack lies a common enemy—bad software.
The biggest problem in computer security today is that many security practitioners don't know what the problem is. Simply put, it's the software! You may have the world's best firewall, but if you let people access an application through the firewall and the code is remotely exploitable, then the firewall will not do you any good (not to mention the fact that the firewall is often a piece of fallible software itself). The same can be said of cryptography. In fact, 85% of CERT security advisories1 could not have been prevented with cryptography [Schneider, 1998].
Data lines protected by strong cryptography make poor targets. Attackers like to go after the programs at either end of a secure communications link because the end points are typically easier to compromise. As security professor Gene Spafford puts it, "Using encryption on the Internet is the equivalent of arranging an armored car to deliver credit card information from someone living in a cardboard box to someone living on a park bench." Internet-enabled applications, including those developed internally by a business, present the largest category of security risk today. Real attackers compromise software. Of course, software does not need to be Internet enabled to be at risk. The Internet is just the most obvious avenue of attack in most systems.
This book is about protecting yourself by building secure software. We approach the software security problem as a risk management problem. The fundamental technique is to begin early, know your threats, design for security, and subject your design to thorough objective risk analyses and testing. We provide tips and techniques that architects, developers, and managers can use to produce Internet-based code that is as secure as necessary.
A good risk management approach acknowledges that security is often just a single concern among many, including time-to-market, cost, flexibility, reusability, and ease of use. Organizations must set priorities, and identify the relative costs involved in pursuing each. Sometimes security is not a high priority.
Some people disagree with a risk management security approach. Many people would like to think of security as a yes-or-no, black-or-white affair, but it's not. You can never prove that any moderately complex system is secure. Often, it's not even worth making a system as secure as possible, because the risk is low and the cost is high. It's much more realistic to think of software security as risk management than as a binary switch that costs a lot to turn on.

Software is at the root of all common computer security problems. If your software misbehaves, a number of diverse problems can crop up: reliability, availability, safety, and security. The extra twist in the security situation is that a bad guy is actively trying to make your software misbehave. This certainly makes security a tricky proposition.
Malicious hackers don't create security holes; they simply exploit them. Security holes and vulnerabilities—the real root cause of the problem—are the result of bad software design and implementation. Bad guys build exploits (often widely distributed as scripts) that take advantage of the holes. (By the way, we try to refer to bad guys who exploit security holes as malicious hackers instead of simply hackers throughout this book. See the sidebar for more details.)
The term hacker originally had a positive meaning. It sprang from the computer science culture at the Massachusetts Institute of Technology during the late 1960s, where it was used as a badge of honor to refer to people who were exceptionally good at solving tricky problems through programming, often as if by magic. For most people in the UNIX development community, the term's preferred definition retains that meaning, or is just used to refer to someone who is an excellent and enthusiastic programmer. Often, hackers like to tinker with things, and figure out how they work.

Software engineers commonly use the term hacker in a way that carries a slightly negative connotation. To them, a hacker is still the programming world's equivalent of MacGyver—someone who can solve hard programming problems given a ball of fishing line, a matchbook, and two sticks of chewing gum. The problem for software engineers is not that hackers (by their definition) are malicious; it is that they believe cobbling together an ad hoc solution is at best barely acceptable. They feel careful thought should be given to software design before coding begins. They therefore feel that hackers are developers who tend to "wing it," instead of using sound engineering principles. (Of course, we never do that! Wait! Did we say that we're hackers?)

Far more negative is the definition of hacker that normal people use (including the press). To most people, a hacker is someone who maliciously tries to break software. If someone breaks in to your machine, many people would call that person a hacker. Needless to say, this definition is one that the many programmers who consider themselves "hackers" resent.
Do we call locksmiths burglars just because they could break into our house if they wanted to do so? Of course not. But that's not to say that locksmiths can't be burglars. And, of course, there are hackers who are malicious, do break into other people's machines, and do erase disk drives. These people are a very small minority compared with the number of expert programmers who consider themselves "hackers."
In the mid-1980s, people who considered themselves hackers, but hated the negative connotations the term carried in most people's minds (mainly because of media coverage of security problems), coined the term cracker. A cracker is someone who breaks software for nefarious ends.
Unfortunately, this term hasn't caught on much outside of hacker circles. The press doesn't use it, probably because it is already quite comfortable with "hacker." And it sure didn't help that they called the movie Hackers instead of Crackers. Nonetheless, we think it is insulting to lump all hackers together under a negative light. But we don't like the term cracker either. To us, it sounds dorky, bringing images of Saltines to mind. So when we use the term hacker, that should give you warm fuzzy feelings. When we use the term malicious hacker, attacker, or bad guy, it is okay to scowl. If we say "malicious hacker," we're generally implying that the person at hand is skilled. If we say anything else, they may or may not be.
Some hackers who are interested in security may take a piece of software and try to break it just out of sheer curiosity. If they do break it, they won't do anything malicious; they will instead notify the author of the software, so that the problem can be fixed. If they don't tell the author in a reasonable way, then they can safely be labeled malicious. Fortunately, most people who find serious security flaws don't use their findings to do bad things.
At this point, you are probably wondering who the malicious attackers and bad guys really are! After all, it takes someone with serious programming experience to break software. This may be true in finding a new exploit, but not in exploiting it. Generally, bad guys are people with little or no programming ability who are nonetheless capable of downloading, building, and running programs other people write (hackers often call this sort of bad guy a script kiddie). These kinds of bad guys go to a hacker site, such as (the largely defunct) http://www.rootshell.com, download a program that can be used to break into a machine, and get the program to run. It doesn't take all that much skill (other than hitting return a few times). Most of the people we see engage in this sort of activity tend to be teenage boys going through a rebellious period. Despite the fact that such people tend to be relatively unskilled, they are still very dangerous.
So, you might ask yourself, who wrote the programs that these script kiddies used to break stuff? Hackers? And if so, then isn't that highly unethical? Yes, hackers tend to write most such programs. Some of those hackers do indeed have malicious intent, and are truly bad people. However, many of these programs are made public by hackers who believe in the principle of full disclosure. The basic idea behind this principle is that everyone will be encouraged to write more secure software if all security vulnerabilities in software are made public. It encourages vendors to acknowledge and fix their software and can expose people to the existence of problems before fixes exist.
Of course, there are plenty of arguments against full disclosure too. Although full disclosure may encourage vendors to fix their problems quickly, there is almost always a long delay before all users of a vendor's software upgrade their software. Giving so much information to potential attackers makes it much easier for them to exploit those lagging users. In the year 2000, Marcus Ranum sparked considerable debate on this issue, arguing that full disclosure does more harm than good. Suffice it to say, people without malicious intent do release working attack code on a regular basis, knowing that there is a potential for their code to be misused. These people believe the benefits of full disclosure outweigh the risks. We provide third-party information on these debates on the book's Web site.
We want to do our part in the war against crummy software by providing lots of information about common mistakes, and by providing good ideas that can help you build more secure code....
1. CERT is an organization that studies Internet security vulnerabilities, and occasionally releases security advisories when there are large security problems facing the Internet. See http://www.cert.org.
"A book is a machine to think with."
—I.A. Richards, Principles of Literary Criticism
This book exists to help people involved in the software development process learn the principles necessary for building secure software. The book is intended for anyone involved in software development, from managers to coders, though it contains the low-level detail that is most applicable to programmers. Specific code examples and technical details are presented in the second part of the book. The first part is more general and is intended to set an appropriate context for building secure software by introducing security goals, security technologies, and the concept of software risk management.
There are plenty of technical books that deal with computer security, but until now, none have devoted significant effort to the topic of developing secure programs. If you want to learn how to set up a firewall, lock down a single host, or build a virtual private network, there are other resources to turn to outside this book. Since most security books are intended to address the pressing concerns of network-level security practitioners, they tend to focus on how to promote secrecy and protect networked resources in a world where software is chronically broken.
Unfortunately, many security practitioners have gotten used to a world where having security problems in software is common, and even acceptable. Some people even assume that it is too hard to get developers to build secure software, so they don't raise the issue. Instead, they focus their efforts on "best practice" network security solutions, erecting firewalls, and trying to detect intrusions and patch known security problems in a timely manner.
We are optimistic that the problem of bad software security can be addressed. The truth is, writing programs that have no security flaws in them is difficult. However, we assert that writing a "secure-enough" program is much easier than writing a completely bug-free program. Should people give up on removing bugs from software just because it's essentially impossible to eliminate them all? Of course not. By the same token, people shouldn't just automatically throw in the software security towel before they even understand the problem.
A little bit of education can go a long way. One of the biggest reasons why so many products have security problems is that many technologists involved in the development process have never learned very much about how to produce secure code. One problem is that up until now there have been very few places to turn for good information. A goal of this book is to close the educational gap, arming software practitioners with the basic techniques necessary to write secure programs.
That said, you should not expect to eradicate all security problems in your software simply by reading this book. Claiming that this book provides a silver bullet for security would ignore the realities of how difficult it is to secure computer software. We don't ignore reality; we embrace it by treating software security as a risk management problem.
In the real world, your software will likely never be totally secure. First of all, there is no such thing as 100% security. Most software has security risks that can be exploited; it's just a matter of how much money and effort is required to break the system in question. Even if your software is bug-free, and your servers are protected by firewalls, someone who wants to target you might get an insider to attack you. Or, they might perform a "black bag" (break-in) operation. Because security is complicated and is a system-wide property, we not only provide general principles for secure software design, but we also focus on the most common risks, and how to mitigate them.
This book is divided into two parts. Part one focuses on the things you should know about software security before you even think about producing code. We focus on how to integrate security into your software engineering practice. Emphasis is placed on methodologies and principles that reduce security risk by getting started early in the development lifecycle. Designing security into a system from the beginning is much easier and orders of magnitude cheaper than retrofitting a system for security later. Not only do we focus on requirements and design, we also devote significant attention to analyzing the security of a system, which we believe to be a critical skill. Part one of the book should be of general interest to anyone involved in software development at any level, from business-level leadership to developers in the trenches.
In part two, we get our hands dirty with implementation-level issues. Even with a solid architecture, there is plenty of room for security problems to be introduced at development time. We show developers in gory detail how to recognize and avoid common implementation-level problems such as buffer overflows and race conditions. Part two of the book is intended for those people who feel comfortable around code.
We purposely cover material that we believe to be of general applicability. That is, unless a topic is security critical, we try to stay away from anything that is dependent on a particular operating system or programming language. For example, we do not discuss POSIX "capabilities" because they are not widely implemented. However, we devote an entire chapter to buffer overflows because they are a problem of extraordinary magnitude, even though a majority of buffer overflows are specific to C and C++.
Since our focus is on technologies that are applicable at the broadest levels, there are plenty of worthy technologies that we do not cover, including Kerberos, PAM (Pluggable Authentication Modules), and mobile code sandboxing, to name a few. Many of these technologies merit their own books (though not all of them are adequately covered today). This book's companion web site provides links to information sources covering interesting security technologies that we left out.
While we cover material that is largely language independent, most of our examples are written in C, mainly because it is so widely used, but also because it is harder to get things right in C than in other languages. Porting our example code to other programming languages is often a matter of finding the right calls or constructs for the target programming language. However, we do include occasional code examples in Python, Java, and Perl, generally in situations where those languages are significantly different from C. All of the code in this book is available on the companion web site.
There is a large Unix bias to this book even though we tried to stick to operating system independent principles. We admit that our coverage of specifics for other operating systems, particularly Windows, leaves something to be desired. While Windows NT is loosely POSIX compliant, in reality Windows programmers tend not to use the POSIX API. For instance, we hear that most Windows programmers do not use the standard C string library, preferring Unicode string-handling routines instead. But, as of this writing, we still don't know which common functions in the Windows API are susceptible to buffer overflow, so we can't provide a comprehensive list. If someone creates such a list in the future, we will gladly post it on the book's web site.
The code we provide in this book has all been tested on a machine running stock Red Hat 6.2. Most of it has been tested on an OpenBSD machine as well. However, we provide the code on an "as is" basis. We will try to make sure that the versions of the code posted on the web site are as portable as possible; but be forewarned, our available resources for ensuring portability are low. We may not have time to help people who can't get code to compile on a particular architecture, but we will be very receptive to readers who send in patches.