Building Internet Firewalls: Internet and Web Security by Elizabeth D. Zwicky, Simon Cooper, and D. Brent Chapman
In the five years since the first edition of this classic book was published, Internet use has exploded. The commercial world has rushed headlong into doing business on the Web, often without integrating sound security technologies and policies into their products and methods. The security risks--and the need to protect both business and personal data--have never been greater. We've updated Building Internet Firewalls to address these newer risks.
What kinds of security threats does the Internet pose? Some, like password attacks and the exploiting of known security holes, have been around since the early days of networking. And others, like the distributed denial of service attacks that crippled Yahoo, eBay, and other major e-commerce sites in early 2000, are in current headlines.
Firewalls, critical components of today's computer networks, effectively protect a system from most Internet security threats. They keep damage on one part of the network--such as eavesdropping, a worm program, or file damage--from spreading to the rest of the network. Without firewalls, network security problems can rage out of control, dragging more and more systems down.
Like the bestselling and highly respected first edition, Building Internet Firewalls, 2nd Edition, is a practical and detailed step-by-step guide to designing and installing firewalls and configuring Internet services to work with a firewall. Much expanded to include Linux and Windows coverage, the second edition describes:
- Firewall technologies: packet filtering, proxying, network address translation, virtual private networks
- Architectures such as screening routers, dual-homed hosts, screened hosts, screened subnets, perimeter networks, internal firewalls
- Issues involved in providing a variety of new Internet services and protocols through a firewall:
- Email and News
- File transfer and sharing services such as NFS, Samba
- Remote access services such as Telnet, the BSD "r" commands, SSH, BackOrifice 2000
- Real-time conferencing services such as ICQ and talk
- Naming and directory services (e.g., DNS, NetBT, the Windows Browser)
- Authentication and auditing services (e.g., PAM, Kerberos, RADIUS)
- Administrative services (e.g., syslog, SNMP, SMS, RIP and other routing protocols, and ping and other network diagnostics)
- Intermediary protocols (e.g., RPC, SMB, CORBA, IIOP)
- Database protocols (e.g., ODBC, JDBC, and protocols for Oracle, Sybase, and Microsoft SQL Server)
The book's complete list of resources includes the location of many publicly available firewall construction tools.
--Annotation c. Book News, Inc., Portland, OR (booknews.com)
- Publisher: O'Reilly Media, Incorporated
- Sold by: Barnes & Noble
- Format: NOOK Book
- File size: 6 MB
Read an Excerpt
Chapter 13: Internet Services and Firewalls
This chapter gives an overview of the issues involved in using Internet services through a firewall, including the risks involved in providing services and the attacks against them, ways of evaluating implementations, and ways of analyzing services that are not detailed in this book.
The remaining chapters in Part III describe the major Internet services: how they work, what their packet filtering and proxying characteristics are, what their security implications are with respect to firewalls, and how to make them work with a firewall. The purpose of these chapters is to give you the information that will help you decide which services to offer at your site and to help you configure these services so they are as safe and as functional as possible in your firewall environment. We occasionally mention things that are not, in fact, Internet services but are related protocols, languages, or APIs that are often used in the Internet context or confused with genuine Internet services.
These chapters are intended primarily as a reference; they're not necessarily intended to be read in depth from start to finish, though you might learn a lot of interesting stuff by skimming this whole part of the book.
At this point, we assume that you are familiar with what the various Internet services are used for, and we concentrate on explaining how to provide those services through a firewall. For introductory information about what particular services are used for, see Chapter 2, Internet Services.
Where we discuss the packet filtering characteristics of particular services, we use the same abstract tabular form we used to show filtering rules in Chapter 8, Packet Filtering. You'll need to translate various abstractions like "internal", "external", and so on to appropriate values for your own configuration. See Chapter 8 for an explanation of how you can translate abstract rules to rules for particular products and packages, as well as more information on packet filtering in general.
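To make the translation concrete, here is a minimal sketch (not from the book) of how an abstract rule such as "allow internal to external, TCP port 25" might be mapped onto concrete addresses; the internal network range and the rule set are hypothetical placeholders:

```python
from dataclasses import dataclass
from ipaddress import ip_address, ip_network

# Hypothetical internal range -- substitute your own site's networks.
INTERNAL_NET = ip_network("10.0.0.0/8")

@dataclass
class Packet:
    src: str        # source IP address
    dst: str        # destination IP address
    protocol: str   # "tcp" or "udp"
    dst_port: int

def classify(addr: str) -> str:
    """Map a concrete address onto the abstract "internal"/"external" labels."""
    return "internal" if ip_address(addr) in INTERNAL_NET else "external"

# Each rule mirrors one row of the abstract tables:
# (source zone, destination zone, protocol, destination port, action).
RULES = [
    ("internal", "external", "tcp", 25, "allow"),   # outbound SMTP
    ("external", "internal", "tcp", 25, "allow"),   # inbound SMTP
]
DEFAULT_ACTION = "deny"  # default-deny stance

def evaluate(pkt: Packet) -> str:
    key = (classify(pkt.src), classify(pkt.dst), pkt.protocol, pkt.dst_port)
    for *match, action in RULES:
        if tuple(match) == key:
            return action
    return DEFAULT_ACTION

assert evaluate(Packet("10.1.2.3", "192.0.2.1", "tcp", 25)) == "allow"
assert evaluate(Packet("10.1.2.3", "192.0.2.1", "tcp", 23)) == "deny"
```

Real packet filtering products express the same rows in their own syntax, but the lookup logic, and the default-deny fallthrough, is the part the abstract tables capture.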
Where we discuss the proxy characteristics of particular services, we rely on concepts and terminology discussed in Chapter 9, Proxy Systems.
Throughout the chapters in Part III, we'll show how each service's packets flow through a firewall. The following figures show the basic packet flow: when a service runs directly (Figure 13-1) and when a proxy service is used (Figure 13-2). The other figures in these chapters show variations of these figures for individual services. If there are no specific figures for a particular service, you can assume that these generic figures are appropriate for that service.
TIP: We frequently characterize client port numbers as "a random port number above 1023". Some protocols specify this as a requirement, and on others, it is merely a convention (spread to other platforms from Unix, where ports below 1024 cannot be opened by regular users). Although it is theoretically allowable for clients to use ports below 1024 on non-Unix platforms, it is extraordinarily rare: rare enough that many firewalls, including ones on major public sites that handle clients of all types, rely on this distinction and report never having rejected a connection because of it.
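This convention is easy to test mechanically; a trivial sketch of the check such firewalls rely on (the boundary values follow the Unix privileged-port split described above):

```python
def is_conventional_client_port(port: int) -> bool:
    """True if a port follows the "random port above 1023" client convention.

    Ports 0-1023 are privileged on Unix and conventionally reserved for
    servers; well-behaved clients pick ephemeral ports in 1024-65535.
    """
    return 1024 <= port <= 65535

assert is_conventional_client_port(35000)
assert not is_conventional_client_port(513)   # a BSD "r" command source port
```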
Attacks Against Internet Services
As we discuss Internet services and their configuration, certain concepts are going to come up repeatedly. These reflect the process of evaluating exactly what risks a given service poses. These risks can be roughly divided into two categories--first, attacks that involve making allowed connections between a client and a server, including:
- Command-channel attacks
- Data-driven attacks
- Third-party attacks
- False authentication of clients

and second, those attacks that get around the need to make connections, including:
- Packet sniffing
- Data injection and modification
- Denial of service
A command-channel attack is one that directly attacks a particular service's server by sending it commands in the same way it regularly receives them (down its command channel). There are two basic types of command-channel attacks: attacks that exploit valid commands to do undesirable things, and attacks that send invalid commands and exploit server bugs in handling invalid input.
If it's possible to use valid commands to do undesirable things, that is the fault of the person who decided what commands there should be. If it's possible to use invalid commands to do undesirable things, that is the fault of the programmer(s) who implemented the protocol. These are two separate issues and need to be evaluated separately, but you are equally unsafe in either case.
The original headline-making Internet problem, the 1988 Morris worm, exploited two kinds of command-channel attacks. It attacked Sendmail by using a valid debugging command that many machines had left enabled and unsecured, and it attacked finger by giving it an overlength command, causing a buffer overflow.
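Both kinds of command-channel attack can be blunted at the server with the same discipline: bound every input before parsing it, and accept only a known command set. A hedged sketch (the command names and limits are illustrative, loosely modeled on SMTP, not the book's code):

```python
# Illustrative command set and limit -- assumptions for this sketch.
MAX_LINE = 512                                       # bound all input lengths
ALLOWED = {"HELO", "MAIL", "RCPT", "DATA", "QUIT"}   # no DEBUG-style extras

def handle_line(raw: bytes) -> str:
    # Guard against invalid input first: overlength or non-ASCII data never
    # reaches the parsing code whose bugs it might otherwise exploit.
    if len(raw) > MAX_LINE:
        return "500 Line too long"
    try:
        line = raw.decode("ascii")
    except UnicodeDecodeError:
        return "500 Invalid characters"
    # Then refuse valid-but-dangerous commands by allowing only a fixed set.
    parts = line.split()
    verb = parts[0].upper() if parts else ""
    if verb not in ALLOWED:
        return "502 Command not implemented"
    return f"250 OK ({verb})"

assert handle_line(b"DEBUG") == "502 Command not implemented"  # dangerous command refused
assert handle_line(b"X" * 1000) == "500 Line too long"         # overlength input bounded
```

The two guards map onto the two fault categories above: the allowlist constrains what the protocol designer permits, and the length/encoding checks constrain what the implementation will even attempt to parse.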
A data-driven attack is one that involves the data transferred by a protocol, instead of the server that implements it. Once again, there are two types of data-driven attacks: attacks that involve evil data, and attacks that compromise good data. Viruses transmitted in electronic mail messages are data-driven attacks that involve evil data. Attacks that steal credit card numbers in transit are data-driven attacks that compromise good data.
A third-party attack is one that doesn't involve the service you're intending to support at all but that uses the provisions you've made to support one service in order to attack a completely different one. For instance, if you allow inbound TCP connections to any port above 1024 in order to support some protocol, you are opening up a large number of opportunities for third-party attacks as people make inbound connections to completely different servers.
False Authentication of Clients
A major risk for inbound connections is false authentication: the subversion of the authentication that you require of your users, so that an attacker can successfully masquerade as one of your users. This risk is increased by some special properties of passwords.
In most cases, if you have a secret you want to pass across the network, you can encrypt the secret and pass it that way. That doesn't help if the information doesn't have to be understood to be used. For instance, encrypting passwords will not work because an attacker who is using packet sniffing can simply intercept and resend the encrypted password without having to decrypt it. (This is called a playback attack because the attacker records an interaction and plays it back later.) Therefore, dealing with authentication across the Internet requires something more complex than encrypting passwords. You need an authentication method where the data that passes across the network is nonreusable, so an attacker can't capture it and play it back.
Simply protecting you against playback attacks is not sufficient, either. An attacker who can find out or guess what the password is doesn't need to use a playback attack, and systems that prevent playbacks don't necessarily prevent password guessing. For instance, Windows NT's challenge/response system is reasonably secure against playback attacks, but the password actually entered by the user is the same every time, so if a user chooses to use "password", an attacker can easily guess what the password is.
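One way to make the authentication data nonreusable is a challenge/response scheme: the server issues a fresh random challenge, and the client proves knowledge of the secret without sending anything an eavesdropper could replay. A sketch using Python's standard hmac module (the shared secret is a hypothetical placeholder; note that, exactly as described above, this does nothing to stop an attacker guessing a weak secret):

```python
import hashlib
import hmac
import os

# Hypothetical shared secret, established out of band.
SECRET = b"correct horse battery staple"

def make_challenge() -> bytes:
    # A fresh random nonce per attempt makes each response single-use.
    return os.urandom(16)

def respond(secret: bytes, challenge: bytes) -> bytes:
    # The client proves knowledge of the secret; the secret itself
    # never crosses the network.
    return hmac.new(secret, challenge, hashlib.sha256).digest()

def verify(secret: bytes, challenge: bytes, response: bytes) -> bool:
    return hmac.compare_digest(respond(secret, challenge), response)

challenge = make_challenge()
response = respond(SECRET, challenge)
assert verify(SECRET, challenge, response)
# A sniffed response is useless against the next, different challenge:
assert not verify(SECRET, make_challenge(), response)
```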
Furthermore, if an attacker can convince the user that the attacker is your server, the user will happily hand over his username and password data, which the attacker can then use immediately or at leisure. To prevent this, either the client needs to authenticate itself to the server using some piece of information that's not passed across the connection (for instance, by encrypting the connection) or the server needs to authenticate itself to the client.
Hijacking attacks allow an attacker to take over an open terminal or login session from a user who has been authenticated and authorized by the system. Hijacking attacks generally take place on a remote computer, although it is sometimes possible to hijack a connection from a computer on the route between the remote computer and your local computer.
How can you protect yourself from hijacking attacks on the remote computer? The only way is to allow connections only from remote computers whose security you trust; ideally, these computers should be at least as secure as your own. You can apply this kind of restriction by using either packet filters or modified servers. Packet filters are easier to apply to a collection of systems, but modified servers on individual systems allow you more flexibility. For example, a modified FTP server might allow anonymous FTP from any host, but authenticated FTP only from specified hosts. You can't get this kind of control from packet filtering. Under Unix, connection control at the host level is available from Wietse Venema's TCP Wrapper or from wrappers in TIS FWTK (the netacl program); these may be easier to configure than packet filters but provide the same level of discrimination -- by host only.
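The FTP policy just described, anonymous access from anywhere but authenticated access only from specified hosts, reduces to a simple per-connection check. A sketch (the addresses and trusted networks are placeholders, not a recommendation):

```python
from ipaddress import ip_address, ip_network

# Placeholder trusted networks -- substitute hosts whose security you trust.
TRUSTED = [ip_network("192.0.2.0/24")]

def connection_allowed(client_ip: str, authenticated: bool) -> bool:
    if not authenticated:
        return True   # anonymous FTP permitted from any host
    # Authenticated FTP only from trusted hosts.
    return any(ip_address(client_ip) in net for net in TRUSTED)

assert connection_allowed("203.0.113.9", authenticated=False)
assert connection_allowed("192.0.2.10", authenticated=True)
assert not connection_allowed("203.0.113.9", authenticated=True)
```

A packet filter cannot make this distinction because it never sees whether the session is anonymous or authenticated; only the modified server has that information.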
Hijacking by intermediate sites can be avoided using end-to-end integrity protection. If you use end-to-end integrity protection, intermediate sites will not be able to insert authentic packets into the data stream (because they don't know the appropriate key and the packets will be rejected) and therefore won't be able to hijack sessions traversing them. The IETF IPsec standard provides this type of protection at the IP layer under the name of "Authentication Headers", or AH protocol (RFC 2402). Application-layer hijacking protection, along with privacy protection, can be obtained by adding a security protocol to the application; the most common choices for this are Transport Layer Security (TLS) or the Secure Sockets Layer (SSL), but there are also applications that use the Generic Security Services Application Programming Interface (GSSAPI). For remote access to Unix systems, the use of SSH can eliminate the risk of network-based session hijacking. IPsec, TLS, SSL, and GSSAPI are discussed further in Chapter 14, Intermediary Protocols. SSH is discussed in Chapter 18, Remote Access to Hosts.
Hijacking at the remote computer is quite straightforward, and the risk is great if people leave connections unattended. Hijacking from intermediate sites is a fairly technical attack and is only likely if there is some reason for people to target your site in particular. You may decide that hijacking is an acceptable risk for your own organization, particularly if you are able to minimize the number of accounts that have full access and the time they spend logged in remotely. However, you probably do not want to allow hundreds of people to log in from anywhere on the Internet. Similarly, you do not want to allow users to log in consistently from particular remote sites without taking special precautions, nor do you want users to log in to particularly secure accounts or machines from the Internet.
The risk of hijacking can be reduced by having an idle session policy with strict enforcement of timeouts. In addition, it's useful to have auditing controls on remote access so that you have some hope of noticing if a connection is hijacked...
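An idle-timeout policy shrinks the window in which an unattended session can be taken over. A minimal sketch of the bookkeeping such enforcement needs (the timeout value is an assumed policy choice, not one from the book):

```python
import time

IDLE_TIMEOUT = 600  # seconds; the policy value here is an assumption

class Session:
    def __init__(self):
        self.last_activity = time.monotonic()

    def touch(self):
        # Call on every keystroke or request to reset the idle clock.
        self.last_activity = time.monotonic()

    def expired(self, now=None):
        now = time.monotonic() if now is None else now
        return now - self.last_activity > IDLE_TIMEOUT

s = Session()
assert not s.expired()
# Simulate past-the-timeout inactivity by supplying an explicit clock value:
assert s.expired(now=s.last_activity + IDLE_TIMEOUT + 1)
```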
Meet the Author
Zwicky is a director of Counterpane Internet Security, a managed security services company. She has been doing large-scale Unix system administration and related work for 15 years, and was a founding board member of both the System Administrators Guild (SAGE) and BayLISA (the San Francisco Bay Area system administrator group), as well as a nonvoting member of the first board of the Australian system administration group, SAGE-AU. She has been involuntarily involved in Internet security since before the 1988 Morris Internet worm. In her lighter moments, she is one of the few people who make significant use of the rand function in PostScript, producing PostScript documents that are different every time they're printed.
Cooper is a computer professional currently working in Silicon Valley. He has worked in different computer-related fields ranging from hardware through operating systems and device drivers to application software and systems support, in both commercial and educational environments. He has an interest in the activities of the Internet Engineering Task Force (IETF) and USENIX, is a member of the British Computer Conservation Society, and is a founding member of the Computer Museum History Center. He has released a small number of his own open source programs and has contributed time and code to the XFree86 project. In his spare time, he likes to play ice hockey, solve puzzles of a mathematical nature, and tinker with Linux.
Chapman is a networking professional in Silicon Valley. He has designed and built Internet firewall systems for a wide range of organizations, using a variety of techniques and technologies. He is the founder of the Firewalls Internet mailing list, and creator of the Majordomo mailing list management package. He is the founder, principal, and technical lead of Great Circle Associates, Inc., a highly regarded strategic consulting and training firm specializing in Internet networking and security. Over the last 15 years, he has worked in a variety of consulting, engineering, and management roles in information technology, operations, and technology marketing for a wide range of employers and clients, including the Xerox Palo Alto Research Center (PARC), Silicon Graphics, Inc. (SGI), and Covad Communications.
Most Helpful Customer Reviews
In this day and age, attacks by so-called 'hackers' against companies' internal networks are always a threat, and virtually any business, government, or educational institution needs to protect itself against them. Firewalls (while not 100% safe) offer excellent protection against such attacks. These attacks, as the book explains, can come in many forms, such as 'denial of service' attacks. This updated second edition offers a lot of information about setting up and maintaining a firewall. It describes different types of firewalls, the tools (both software and hardware) you can use to set up your firewall, which Internet services (World Wide Web, electronic mail and netnews, FTP, telnet, teleconferencing, etc.) you can decide to put through a firewall, and how to maintain the firewall once it has been set up. There's a lot of good common-sense information in here too, when it talks about how you go about deciding what should and shouldn't be protected, who will have access to which services, what kind of security policies to set up, and what to do when you do have any type of 'break-in.' I learned quite a bit about firewalls from this book, and anyone who needs to learn about firewalls should get a copy if they haven't already.