- Ch. 1: A fast track guide to ASP.NET
- Ch. 2: Understanding the .NET framework
- Ch. 3: The .NET languages
- Ch. 4: Writing ASP.NET pages
- Ch. 5: Server controls and validation
- Ch. 6: ASP.NET web form controls
- Ch. 7: List controls and data binding
- Ch. 8: Introducing .NET data management
- Ch. 9: Working with relational data
- Ch. 10: Updating relational data sources
- Ch. 11: XML data management in .NET
- Ch. 12: Web applications and global.asax
- Ch. 14: Securing ASP.NET applications
- Ch. 15: Working with collections and lists
- Ch. 16: Working with other base classes
- Ch. 17: .NET components
- Ch. 18: Building ASP.NET server controls
- Ch. 19: Exposing web services
- Ch. 20: Using web services
- Ch. 21: Mobile controls
- Ch. 22: Tracing, error handling, debugging, and performance
- Ch. 23: Migration and interoperability
- Ch. 24: Case study IBuyAdventure.NET
- App. A: The common system namespaces
- App. B: Scott Guthrie's top performance tips
- App. C: Summary of changes to ASP.NET in version 1.1
- App. D: References and further information
We've looked at the basics of Microsoft's new .NET Framework, and at ASP.NET in particular. It changes the way you program with ASP, adding a whole range of new techniques that make it easier to create dynamic pages, web services, and web applications. However, there is one fundamental aspect of almost all applications that we've not yet explored: how we access and work with data stored in other applications or files. In general terms, these sources of information are called data stores. This chapter looks at how the .NET Framework provides access to the many different kinds of data store you may have to interface with.
The .NET Framework includes a series of classes that implement a new data access technology that is specifically designed for use in the .NET world. We'll look at why this has come about, and how it relates to the techniques used in ASP. In fact, the new framework classes provide a whole lot more than just a .NET version of ADO. Like the move from ASP to ASP.NET, they involve fundamental changes in the approach to managing data in external data stores.
While data management is often assumed to relate to relational data sources such as databases, we will also explore the other types of data that are increasingly encountered today. There is extended support within .NET for working with Extensible Markup Language (XML) and its associated technologies. Apart from comprehensive support for the existing XML standards, .NET provides new ways to handle XML. These include integration between XML and traditional relational data access methods.
So, the topics for this chapter are:
The various types of data storage used today and into the future
The need for another data access technology
An overview of the new relational data access techniques in .NET
An overview of the new techniques for working with XML in .NET
Choosing an appropriate data access technology and a data format
Let's start with a look at the way data is stored and accessed today.
Data Stores and Data Access
In the past, the term data store usually meant a database of some kind. Databases were usually file-based, often using fixed-width records written to disk - rather like text files. A database program or data access technology read the files into buffers as tables, and applied rules defined in other files to connect the records from different tables together. As technologies matured, relational databases evolved to provide better storage methods, such as variable-length records and more efficient access techniques.
However, the basic storage medium was still the database - a specialist program that managed the data and exposed it to clients. Obvious examples are Oracle, Informix, Sybase, DB2, and Microsoft's own SQL Server. All are enterprise-oriented applications for storing and managing data in a relational way.
At the same time, desktop database applications matured and became more powerful. In general, this type of program provides its own interface for working with the data. For example, Microsoft Access can be used to build forms and queries that can access and display data in very powerful ways. They often allow the data to be separated from the interface over the network, so that it can reside on a central server. But, again, we're still talking about relational databases.
Moving to a Distributed Environment
In recent years, the requirements and mode of operation of most businesses have changed. Without consciously realizing it, we've moved away from relying on a central relational database to store all the data that a company produces and needs to access. Now, data is stored in email servers, directory services, Office documents, and other places - as well as the traditional relational database.
The move to a more distributed computing paradigm means that the central data store, running on a huge computer in an air-conditioned IT department, is often only a part of the whole corporate data environment. Modern data access technologies need to be able to work with a whole range of different types of data store, as shown in Figure 8-1.
You can see that the range of storage techniques has become quite wide. It's easy to see why the term database is no longer appropriate for describing the many different ways that data is often stored today. Distributed computing means that we have to be able to extract data in a suitable format, move it around across a range of different types of network, and change the format of the data to suit many different types of client device.
The next section explores one of the areas where data storage and management is changing completely - the growth in the use of XML.
XML - A Data Format for the Future?
One of the most far-reaching of the new ideas in computing is the evolution of XML. The World Wide Web Consortium (W3C) issued proposals for XML some three years ago (at the time of writing), and these have matured into standards that are being adopted by almost every sector of the industry.
XML scores when it comes to storing and transferring data: it is an accepted industry standard, and it is just plain text. The former means that we have a way of transferring and exposing information in a format that is independent of platform, operating system, and application. Compare this to, for example, the MIME-encoded recordsets that Internet Explorer's Remote Data Service (RDS) uses. The latter means that you don't need a specific object to handle the data: any manufacturer can build one that will work with XML data, and developers can use one that suits their platform, operating system, programming language, or application.
XML is just plain text, and so you no longer have to worry about how to store and transport it. It can be sent as a text file over the Internet using HTTP (which is effectively a 7-bit only transport protocol). You don't have to encode it into a MIME or UU-encoded form. You can also write it to a disk as a text file, or store it in a database as text. OK, so it often produces a bigger file than the equivalent binary representation, but compression and the availability of large cheap disk drives generally compensate for this.
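Because XML is plain text, any platform's standard tooling can produce and consume it without vendor-specific objects. As a quick illustration of that point (sketched here in Python's standard library, not the .NET classes this book goes on to cover, and using a made-up orders document), parsing and re-serializing an XML string is trivial:

```python
import xml.etree.ElementTree as ET

# A data "document" held as nothing more than a plain string --
# it could equally have arrived over HTTP or been read from a text file.
xml_text = """
<orders>
  <order id="1001"><customer>Alma</customer><total>42.50</total></order>
  <order id="1002"><customer>Brook</customer><total>17.25</total></order>
</orders>
"""

root = ET.fromstring(xml_text)

# Walk the tree and pull values out -- no platform-specific object required.
for order in root.findall("order"):
    print(order.get("id"), order.findtext("customer"), order.findtext("total"))

# Serializing back to plain text is just as simple.
serialized = ET.tostring(root, encoding="unicode")
```

Any other language's XML parser would read the same string and recover the same tree, which is exactly the platform independence the text describes.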
Applications have already started exposing data as XML in many ways. For example, Microsoft SQL Server 2000 includes features that allow you to extract data directly as XML documents, and update the source data using XML documents. Databases such as Oracle 8i and 9i are designed to manipulate XML directly, and the most recent office applications like Word and Excel will save their data in XML format either automatically or on demand.
XML is already directly ingrained into many applications. ASP.NET uses XML format configuration files, and web services expose their interface and data using an implementation of XML called the Simple Object Access Protocol (SOAP).
Other XML Technologies
As well as being a standard in itself, XML has also spawned other standards that are designed to interoperate with it. Two common examples are XML Schemas, which define the structure and content of XML documents, and the Extensible Stylesheet Language for Transformation (XSLT), which is used to perform transformations of the data into new formats.
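To make the XSLT idea concrete, a minimal stylesheet (a hypothetical example, not taken from this book) that transforms a list of order elements into an HTML table might look like this:

```xml
<?xml version="1.0"?>
<xsl:stylesheet version="1.0"
                xmlns:xsl="http://www.w3.org/1999/XSL/Transform">
  <!-- Match the document root and emit an HTML table -->
  <xsl:template match="/orders">
    <table>
      <!-- One row per order element in the source document -->
      <xsl:for-each select="order">
        <tr>
          <td><xsl:value-of select="customer"/></td>
          <td><xsl:value-of select="total"/></td>
        </tr>
      </xsl:for-each>
    </table>
  </xsl:template>
</xsl:stylesheet>
```

The same source data could be given a different stylesheet to produce, say, WML for a mobile device - the transformation logic lives outside the data itself.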
XML Schemas also provide a way for data to be expressed in specific XML formats that can be understood globally, or within specific industries such as pharmaceuticals or accountancy. There are also several server applications that can transform and communicate XML data between applications that expect different specific formats (or, indeed, other non-XML data formats). In the Microsoft world this is BizTalk Server, and there are others, such as Oasis and Rosetta, for other platforms.
Just Another Data Access Technology?
To quote a colleague of mine, "Another year, another Microsoft data access technology". We've just got used to ActiveX Data Objects (ADO), and it's all-change time again. Is this some fiendish plan on Microsoft's behalf to keep us on our toes, or is there a reason why the technology that seemed to work fine in previous versions of ASP is no longer suitable?
In fact, there are several reasons why we really need to move on from ADO to a new technology. We'll examine these next, then later on take a high-level view of the changes that are involved in moving from ADO to the new .NET Framework data access techniques.
.NET Means Disconnected Data
You've seen a bit about how relational databases have evolved over recent years. However, it's not just the data store that has evolved - it's the whole computing environment. Most of the relational databases still in use today were designed to provide a solid foundation for the client-server world. Here, each client connects to the database server over some kind of permanent network connection, and remains connected for the duration of their session.
For example, with Microsoft Access, the client opens a Form window (often defined within their client-side interface program). This form fetches and caches some or all of the data that is required to populate the controls on the form from the server-side database program, and displays it on the client. The user can manipulate the data, and save changes back to the central database over their dedicated connection.
For this to work, the server-side database has to create explicit connections for each client, and maintain these while the client is connected. As long as the database software and the hardware it is running on are powerful enough for the anticipated number of clients, and the network has the bandwidth and stability to cope with the anticipated number of client connections, it all works very well.
But when this is moved to the disconnected world of the Internet, it falls apart very quickly. A stable, high-bandwidth connection can rarely be guaranteed, and the need to keep a connection permanently open quickly causes problems. It's not so bad if you are operating in a limited-user scenario, but for a public web site it's obviously not going to work.
In fact, there are several aspects to being disconnected. The nature of the HTTP protocol that is used on the Web means that connections between client and server are only made during the transfer of data or content. They aren't kept open after a page has been loaded or a recordset has been fetched.
On top of this, there is often a need to use the data extracted from a data store while not even connected to the Internet at all. Maybe while the user is traveling with a laptop computer, or the client is on a dialup connection and needs to disconnect while working with the data then reconnect again later.
This means that we need to use data access technologies where the client can access, download, and cache the data required, then disconnect from the database server or data store. Once the clients are ready, they then need to be able to reconnect and update the original data store with the changes.
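In .NET this pattern is implemented by the classes discussed later in the book. Purely to illustrate the shape of the idea - fetch, disconnect, work offline, reconnect, update - here is a sketch in Python with SQLite (not the book's own APIs; the database file and table are invented for the example):

```python
import os
import sqlite3
import tempfile

DB = os.path.join(tempfile.gettempdir(), "stock_demo.db")  # hypothetical data store

# -- Set up a throwaway data store for the demonstration --
con = sqlite3.connect(DB)
con.execute("CREATE TABLE IF NOT EXISTS stock (id INTEGER PRIMARY KEY, qty INTEGER)")
con.execute("DELETE FROM stock")
con.executemany("INSERT INTO stock VALUES (?, ?)", [(1, 10), (2, 5)])
con.commit()

# 1. Connect and fetch the data we need into an in-memory cache...
cache = {row_id: qty for row_id, qty in con.execute("SELECT id, qty FROM stock")}
# 2. ...then disconnect as early as possible, freeing the connection.
con.close()

# 3. Work with the cached data while disconnected
#    (e.g. on a laptop, or after a dial-up line has dropped).
cache[1] -= 3  # sell three units of item 1

# 4. Reconnect later and push the changes back to the data store.
con = sqlite3.connect(DB)
con.executemany("UPDATE stock SET qty = ? WHERE id = ?",
                [(qty, row_id) for row_id, qty in cache.items()])
con.commit()
final = dict(con.execute("SELECT id, qty FROM stock"))
con.close()
```

Between steps 2 and 4 no connection exists at all, yet the client can keep reading and editing its cached copy - which is precisely the capability that classic connected recordsets lacked.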
Disconnected Data in N-Tier Applications
Another aspect of working with disconnected data arises when you move from a client-server model into the world of n-tier applications. A distributed environment implies that the client and the server are separate, connected by a network. To build applications that work well in this environment, you can use a design strategy that introduces more granular differentiation between the layers, or tiers, of an application.
As Figure 8-2 shows, it's usual to create components that perform the data access in an application (the data tier), rather than having the ASP code hit the data store directly. There is often a series of rules (usually called business rules) that have to be followed, and these can be implemented within components.
They might be part of the components that perform the data access, or they might be separate - forming the business tier (or application logic tier). There may also be a separate set of components within the client application (the presentation tier) that perform specific tasks for managing, formatting, or presenting the data.
The benefits of designing applications along these lines are many, such as reusability of components, easier testing, and faster development.
Let's take a look at how this influences the process of handling data. Within an n-tier application, the data must be passed between the tiers as each client request is processed. So, the data tier connects to the data store to extract the data, perhaps performs some processing upon it, and then passes it to the next tier. At this point, the data tier will usually disconnect from the data store, allowing another instance (another client or a different application) to use the connection.
By disconnecting the retrieved data from the data store at the earliest possible moment, we improve the efficiency of the application and allow it to handle more concurrent users. However, it again demonstrates the need for data access technologies that can handle disconnected data in a useful and easily manageable way - particularly when we need to update the original data in the data store.
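The tiered flow described above can be sketched in miniature. This is only an illustrative shape, again in Python with SQLite rather than the book's .NET components, and every name (the database file, the tax rule, the functions) is invented for the example:

```python
import os
import sqlite3
import tempfile

DB = os.path.join(tempfile.gettempdir(), "ntier_demo.db")  # hypothetical data store

def data_tier_get_orders():
    """Data tier: connect, extract, and disconnect at the earliest moment.

    Returns plain dictionaries, so the later tiers never touch the connection."""
    con = sqlite3.connect(DB)
    try:
        rows = con.execute("SELECT id, total FROM orders").fetchall()
    finally:
        con.close()  # release the connection for other clients immediately
    return [{"id": i, "total": t} for i, t in rows]

def business_tier_apply_rules(orders):
    """Business tier: apply a business rule -- here, add 10% tax (made up)."""
    return [{**o, "total": round(o["total"] * 1.10, 2)} for o in orders]

def presentation_tier_render(orders):
    """Presentation tier: format the disconnected data for display."""
    return [f"Order {o['id']}: {o['total']:.2f}" for o in orders]

# Seed the store, then run one client request through the tiers.
con = sqlite3.connect(DB)
con.execute("CREATE TABLE IF NOT EXISTS orders (id INTEGER, total REAL)")
con.execute("DELETE FROM orders")
con.execute("INSERT INTO orders VALUES (1, 100.0)")
con.commit()
con.close()

lines = presentation_tier_render(business_tier_apply_rules(data_tier_get_orders()))
```

The connection lives only inside the data tier function; everything downstream works on disconnected, plain data structures.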
The Evolution of ADO
Pre-ADO data access technologies, such as Data Access Objects (DAO) and Remote Data Objects (RDO), were designed to provide open data access methods for the client-server world - and are very successful in that environment. For example, if you build Visual Basic applications to access SQL Server over your local network, they work well.
However, with the advent of ASP 1.0, it was obvious that something new was needed. It used only active scripting (such as VBScript and JScript) within the pages, and for these a simplified ActiveX or COM-based technology was required. The answer was ADO 1.0, included with the original ASP installation. ADO allows you to connect to a database to extract recordsets, and perform updates using the database tables, SQL statements, or stored procedures within the database.
However, ADO 1.0 was really only an evolution of the existing technologies, and offered no solution for the disconnected problem. You opened a recordset while you had a connection to the data store, worked with the recordset (maybe updating it or just displaying the contents), then closed it and destroyed the connection. Once the connection was gone, there was no easy way to reconnect the recordset to the original data.
To some extent, the disconnected issue was addressed in ADO 2.0.
Excerpted from Professional ASP.NET 1.1 by Alex Homer, Dave Sussman, Rob Howard, Brian Francis, Karli Watson, and Richard Anderson. Excerpted by permission.