
An Executive's Guide to Information Technology: Principles, Business Models, and Terminology
Product Details

ISBN-13: 9780521853361
Publisher: Cambridge University Press
Publication date: 05/17/2007
Edition description: First Edition
Pages: 386
Product dimensions: 6.69 (w) x 9.61 (h) x 0.87 (d) inches
About the Author
Dr Stephen Murrell obtained a D.Phil. in Computation in 1986 from Oxford University's Programming Research Group. He is currently a lecturer in Computer Engineering at the University of Miami, where he specializes in teaching Programming, Algorithms, and Operating Systems. His primary area of research is in Programming Languages.
Read an Excerpt
Cambridge University Press
978-0-521-85336-1 - An Executive's Guide to Information Technology: Principles, Business Models, and Terminology - by Robert Plant and Stephen Murrell
Excerpt
Introduction
In writing this book, we have drawn upon our experiences as professors, consultants, and technologists to provide a resource for executives, students, and other readers who have a desire to achieve a rapid understanding of a technology, a computer-related process methodology, or a technology-related law.
The book provides not only the definitions for over 200 terms, but also a concise overview of the term, the associated business value proposition, and a summary of the positive and negative aspects of the technology or processes underlying the term.
The book addresses a problem faced by many executives working in the twenty-first-century organization: that of understanding technology without needing to become a technologist. Today’s executives use or are responsible for technology in nearly every aspect of their organization’s operations; however, the pace of change in technology, the misinformation provided by informal sources, and the general opaqueness of the terminology can be off-putting for executives and managers. In order to help executives overcome these problems, we have drawn upon over twenty years of teaching at the executive level and our backgrounds as technologists to provide clear, understandable descriptions of the most important technology and terminology in use within today’s organizations.
The executives’ need to understand technology is undeniable, but their role dictates that they must understand technology at two levels. Firstly, they must understand what the technology actually is, and this is addressed in our text by the provision of an overview section for each term. This section aims to cut through the “technobabble” and the “spin” and to provide a solid basis of understanding the technology, such that executives can appraise the role of a technology within the IT organization and the organization as a whole.
A second aspect of an executive’s role is to understand the business value proposition behind the technologies, and this is addressed in the second section presented for every term. This enables executives to come up to speed quickly on the major issues and on the larger role a technology plays in their enterprise and beyond; again, references are provided to enable further understanding to be gained.
A third aspect of today’s executives’ function is to understand their obligations with respect to legislation and regulatory frameworks such as the Sarbanes–Oxley Act and HIPAA. The book addresses this by including a set of UK and US legislative requirements under which organizations and their executives in those jurisdictions must operate.
Finally, each item is concluded with concise summaries of the positive and negative aspects generally associated with the technology. This allows executives to ascertain very quickly the key points associated with the technology, law, or process model.
While no book on technology can be exhaustive, we have aimed to provide clear and technically sound descriptions of as many of the terms used by technologists, vendors, and users as possible, to assist executives and other readers in achieving a rapid understanding of the technologies, laws, and processes involved.
Robert Plant, Ph.D., Eur.Ing., C.Eng., FBCS
Stephen Murrell, D.Phil.
ACM (Association for Computing Machinery)
Definition: The Association for Computing Machinery was founded in 1947 to act as a professional organization focused upon the advancement of information-technology skills.
Overview
The ACM acts as a professional organization within the technology industry. The society provides a range of services, including professional development courses, scholarly journals, conferences, sponsorship of special interest groups (SIGs), and sponsorship of public awareness activities that pertain to technology, and to scientific and educational activities. The conferences, peer-reviewed publications, and SIGs provide high-quality forums for the interchange of information in the area of information technology. ACM provides an electronic portal through which a large digital library of technology-related information may be accessed, including academic and professional journals, conference proceedings, and a repository of algorithms.
Business value proposition
The ACM provides a range of resources that can assist technology professionals in developing their skills. These include continuing education programs, a high-quality database of technology literature, professional conferences, and networking events. The peer-reviewed literature includes an extensive library of algorithms allowing technology professionals to use tested and proven algorithms rather than having to reinvent them. The ACM also provides ethical guidelines for its membership and monitors public policy issues in the area of technology and science. Access to the resources of the ACM provides an IT professional with a quick, effective, and efficient mechanism through which the continuously changing fields of computer science and information technologies can be monitored.
Summary of positive issues
The ACM provides a large set of high-quality resources for the technology community. It promotes ethical standards for the IT community and provides resources to assist with the governance issues of IT organizations.
Summary of potentially negative issues
Membership of a professional organization such as the ACM is optional, not compulsory as is the case in the older professions of medicine (e.g., American Medical Association, British Medical Council), law (e.g., the American Bar Association), and traditional engineering. There is no equivalent sanction to being “disbarred” for computing professionals.
References
http://acm.org
ACM, 1515 Broadway, 17th Floor, New York, NY 10036, USA.
Associated terminology: AIS, BCS, IEEE, W3C.
Advertising
Foundation concepts: Web, e-Commerce.
Definition: Web-based advertising covers many techniques and technologies, each of which is intended to attract the user to take notice of the advertising and, where appropriate, act upon that advert.
Overview
Advertising via the internet has been growing in popularity and sophistication since the internet was deregulated in 1995 and has spawned a series of advertising initiatives. An early form of advertising was the banner ad, a simple block at the top of a web page announcing a service or product; when the user clicked upon it, it acted as a hyperlink taking them to the new site and presenting more information on the product, and, of course, an opportunity to buy it. The advertiser would then pay the original web site for the traffic that “clicked through,” and possibly pay more should a successful transaction occur.
Banner ads proved in many cases to be less effective than desired, and so the industry started to create personalization software based upon individuals’ online web-surfing activities, the goal being to present specific products and services based upon users’ past preferences. These personalization systems drew criticism from civil liberties groups, concerned that they could profile individuals and that selling the information would infringe the right to personal privacy.
As the internet evolved, more sophisticated advertising developed, including the infamous Pop-up ad, an intrusive message that suddenly appears upon the screen in its own window. Pop-ups are often embedded Java applets or JavaScript programs that are initiated from the web page as it is viewed; usually the pop-up window can be closed, but some of the more persistent pop-up advertising cascades so that when one window is closed another opens, slowly taking over the machine’s resources. It may be necessary to shut the machine down completely to escape this situation. Careful control of the web browser’s security settings can usually prevent the problem from occurring in the first place.
A form of advertising that is also intrusive, and in some instances criminal, is Adware (the term is used to denote a certain type of intrusive advertising but is also the registered and trademarked name of a company that sells anti-spyware and anti-spam software). Adware takes the form of pop-up advertisements or banner ads and comes from several sources. One source is certain “freeware” or “shareware” products that vendors give away without charge but have embedded advertising into the code so that it suddenly appears upon the screen and may or may not be removable, or could be subject to a time-delayed closure. As a condition of use, some adware programs go on to install programs on the user’s machine, known as Adbots, and these programs then act as a type of Spyware, sending back information to their advertising source pertaining to the user’s behavior. Often these additional installations are undisclosed.
Illicitly installed software may redirect web browsers to access all pages through a proxy, which can add banner advertisements to totally non-commercial pages that in reality bear no such content. Home-page hijacking is also a common problem: a web browser’s default or home page may be reset to a commercial site, so that, every time the web browser is started up, it shows advertisements or installs additional adware.
Business value proposition
Well-placed internet advertising can be of significant benefit to the company or individual sending it out. In the 2000 US presidential election, the Republican Party’s use of email to spur its registered voters in the State of Florida to go out and vote was acknowledged as influential in George W. Bush’s closely contested victory over Al Gore, and it changed the nature of political campaigning. The internet can also be used as a non-intrusive distribution model for advertising that is driven by user demand; an example was the advertising campaign by BMW, which used top film directors, writers, and famous actors to create short films in which the vehicles were as much the stars as the actors. The “BMW films” series proved to be hugely popular and reinforced the brand.
Many early dot-com-era (1995–2000) companies attempted to finance their business through the use of a Click Through advertising model, but were unsuccessful since the revenues generated were in most cases insufficient to sustain the business. Online advertising has been used effectively by some companies, a prime example being Google, as a revenue source aligned to a successful business model and popular product.
Many companies now offer pop-up blockers, spam blockers, and other anti-advertising software, and hence the ability of advertisers to reach their audience legally has been curtailed. As new technologies continue to develop, however, opportunities to create new advertising media develop in tandem.
Summary of positive issues
Online advertising is a relatively low-cost mechanism for the dissemination of commercial messages. The basic technology behind internet advertising is well known and continues to evolve, and consequently fresh approaches are constantly being developed. Non-intrusive internet advertising, where consumers pull the advertising rather than the advertising being pushed onto them, can be very successful. Forms of advertising that don’t work will (with any luck) very quickly die out.
Summary of potentially negative issues
Some forms of advertising, spam for example, are illegal. Internet advertising is frequently thought of as annoying and intrusive by the recipient. Anti-advertisement software and systems are continually being improved to block intrusive advertising. It is difficult to imagine why any customer would do business with a company that took over their computer with cascading pop-up advertisements, and corporations would be well advised to think very carefully before embarking upon such a campaign.
Reference
R. L. Zeff (1999). Advertising on the Internet (New York, John Wiley and Sons).
Associated terminology: Internet, JavaScript, Spam, Spyware, Web browser.
Agent
Foundation concepts: Artificial intelligence.
Definition: A self-contained software component acting on behalf of a human controller.
Overview
An agent, sometimes also called a “bot,” is a software application intended to perform some service for a human controller without requiring constant supervision. A simple application could be to carry out a particular bidding strategy in an online auction. After being programmed with the human controller’s desires, an agent could constantly monitor the state of the auction, reacting to competing bids as instructed, without any intervention; the human controller would be free to get on with life in the meantime.
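To make this concrete, here is a minimal sketch of such a bidding agent in Python. This is our illustration, not part of the original text: the event stream, bidder names, and limits are invented stand-ins for whatever interface a real auction site would expose.

```python
def bidding_agent(events, max_bid=150.00, increment=5.00):
    """React to a stream of (bidder, amount) bid events without supervision."""
    for bidder, amount in events:
        # Only respond to rivals, and never exceed the controller's limit.
        if bidder != "me" and amount + increment <= max_bid:
            yield amount + increment  # our counter-bid

# Simulated rival bids; in reality these would arrive from the auction site.
rival_bids = [("rival", 10.00), ("rival", 40.00), ("rival", 148.00)]
for counter in bidding_agent(rival_bids):
    print(f"agent bids {counter:.2f}")  # 15.00, then 45.00; 148.00 exceeds the limit
```

The point of the sketch is that the controller's desires are captured once, as parameters, after which the agent reacts on its own.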
Another valuable application of agents, one that is only just entering the realms of possibility, is in information gathering and filtering. The internet provides a vast quantity of news and information sources, constantly being renewed and updated. It is impossible for a human reader to monitor even a small fraction of it all; even after identifying a moderately sized set of sources for a particular area of interest, keeping up to date with new developments requires the devotion of significant periods of time. If an agent were programmed to recognize articles that would be of interest, it could maintain a permanent monitoring presence, filtering out just the few items of genuine interest and presenting them to the human reader on demand. This use of agent technology could provide a major increase in personal productivity, but requires that the natural-language problem be mostly solved first. The problem is that software must be capable of fully understanding human prose, not just recognizing the words but determining the intended semantics, and that is still the subject of much current research.
It is generally envisaged that agents will not be static presences on the human controller’s own computer, but should be able to migrate through the network. This would enable an agent to run on the computer that contains the data it is reading, and would therefore make much better use of network bandwidth. It does, however, introduce a serious security and cost problem: agents must have some guarantee of harmlessness before they would be allowed to run remotely, and the expense of providing the computational resources required would have to be borne by somebody.
The idea of an agent as a meaningful representative of a person, acting on their behalf in business or personal transactions, is still very much a matter for science fiction. The problem of artificial intelligence, providing software with the ability to interact with humans in a human-like manner, is far from being solved. It is believed by some that a large number of relatively simple agents, in a hierarchy involving higher-level control agents to coordinate and combine efforts, may be the best way to create an artificial intelligence. This theory remains unproven.
Business value proposition
The use of agents in business has some degree of controversy attached to it. Simplistic agents have been used for some time to carry out stock market transactions. If it is desired to sell a particular stock as soon as it crosses a particular price threshold, constantly monitoring prices to catch exactly the right moment would require a lot of human effort. This is exactly the kind of condition that can easily and reliably be detected and acted upon by very simple software. However, it is widely believed that the extreme speed of these agents has a destabilizing effect on the markets, and may even have been responsible for some minor market crashes. The slowness of human reactions, and the need for a real person to make orders, allows common sense to slip in where it could not be provided by a simple automatic agent.
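A threshold trigger of this kind takes only a few lines to express. The sketch below is ours and purely illustrative: the price feed and threshold are invented, and a real agent would place an order through a broker's interface rather than returning a string.

```python
def price_trigger_agent(price_feed, threshold=75.00):
    """Watch a stream of prices; act the moment the threshold is crossed."""
    for price in price_feed:
        if price >= threshold:
            # A real agent would submit an order via the broker's interface here.
            return f"SELL at {price:.2f}"
    return "threshold never reached"

# Simulated intraday prices; a live agent would subscribe to market data instead.
print(price_trigger_agent([71.10, 73.85, 75.20, 74.90]))  # -> SELL at 75.20
```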
Agents are also used by some online financial services such as automobile-insurance consolidators, who send their bots out to the sites of other companies. The bots fill out the forms on those companies’ sites and retrieve quotes, which are then all displayed together on the consolidator’s site. The unauthorized use of bots on third-party sites is prohibited by many organizations; however, preventing their access to a web site open on the internet can be almost impossible.
Summary of positive issues
Agents can be created and used to automate processes. Agents can be sent out over a network to collect and return information to their owner.
Summary of potentially negative issues
Agent technology is primitive, and only well-structured, relatively simple tasks can be performed safely. Bots and agents are very unwelcome on many web sites and networks, and web site providers should be aware that other organizations may make covert use of their online resources, presenting the results as their own; that can result in an unexpected increase in bandwidth and server load when a popular service is provided.
Reference
J. Bradshaw (1997). Software Agents (Cambridge, MA, MIT Press).
Associated terminology: Artificial intelligence, Natural language processing.
AIS (Association for Information Systems)
Definition: The Association for Information Systems (AIS) was founded in 1994 and acts as a professional organization for academics who specialize in information systems.
Overview
The AIS acts as a professional organization within the technology industry. The aim of the organization is “to advance knowledge in the use of information technology to improve organizational performance and individual quality of work life” (www.aisnet.org). The society provides a range of services, including professional development courses, scholarly journals, conferences, and the sponsorship of special interest groups (SIGs). The AIS is primarily focused on the role of information systems in a business context, and is closely aligned to servicing the needs of management information systems (MIS) professionals and academics who study the field of MIS. The AIS, which merged with ISWORLD (Information Systems World) in 1998, also merged its major conference, the AIS Conference on Information Systems, with the highly respected International Conference on Information Systems (ICIS) in 2001. ICIS provides a forum for the presentation of high-quality papers and acts as the major annual academic recruiting conference for MIS faculty.
Business value proposition
The AIS provides MIS professionals with a set of resources through which their professional skills may be developed. These resources range from continuing education courses and access to a high-quality database of technology literature to professional conferences.
The AIS provides a link between academic research and industrial practice within the field of management information systems. The society publishes two peer-reviewed journals, the Journal of AIS and Communications of AIS, and has over 20 specialist groups, including IT in Healthcare, Enterprise Systems, E-Business, and Accounting Information Systems. The specialty groups provide forums through which academic–industrial research collaborations are undertaken.
Summary of positive issues
The AIS provides a set of high-quality resources for academics and practitioners within the domain of management information systems.
Summary of potentially negative issues
Membership of a professional organization such as the AIS is optional, not compulsory as is the case in the older professions of medicine (e.g., American Medical Association, British Medical Council) and law (e.g., the American Bar Association).
References
http://www.aisnet.org/.
Association for Information Systems, P.O. Box 2712, Atlanta, GA 30301-2712, USA.
Associated terminology: ACM, BCS, IEEE, W3C.
Algorithm
Foundation concepts: Program, Formal methods.
Definition: A set of explicit, unambiguous instructions for solving a problem or achieving a computational result.
Overview
An algorithm is, in a way, a solution to a problem. It is a complete set of instructions which, if followed exactly, will produce a particular result. The method for long multiplication as taught in elementary schools is a good example of an algorithm. The overall task, multiplying together two large numbers, is not something that a person can normally do in one step; we need to be taught a method. The particular way of performing the task, running through the digits of the multiplier one by one, multiplying each by every digit of the multiplicand, writing the resultant digits in a particular pattern, and adding them all up in columns at the end, is an algorithm for multiplication.
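As an illustration (ours, not the book’s), the schoolbook method can itself be written out as an executable procedure: one partial product per digit of the multiplier, shifted into its column and added up.

```python
def long_multiply(multiplicand: int, multiplier: int) -> int:
    """Schoolbook long multiplication: one partial product per multiplier digit."""
    total = 0
    # Run through the multiplier's digits one by one, least significant first.
    for position, digit in enumerate(reversed(str(multiplier))):
        partial = multiplicand * int(digit)   # one row of the written layout
        total += partial * 10 ** position     # shift the row into its column, then add
    return total

assert long_multiply(1234, 5678) == 1234 * 5678  # 7 006 652
```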
Since computers are simply devices that are good at obeying explicit unambiguous sequences of instructions, algorithms are the logical constructs that programs embody. One of the key tasks of programming a solution to a problem is designing a good algorithm.
Just as there are many different ways to solve problems in real life, there are often many different algorithms available to a programmer for one task. An office worker who has to alphabetize a pile of forms may quickly fall into a particular pattern, and get through the job almost mechanically.
(1) Start with all the forms in a pile on the left, and an empty space for a new pile on the right.
(2) Pick up the first form and set it aside.
(3) Scan through all of the remaining forms in the first pile, comparing each with the one that was set aside; when you find a form in the left pile which belongs after the one set aside (in alphabetical order), swap them: set aside the one from the pile, and put the previously set-aside one back in the pile.
(4) Put the set-aside form on top of the new (right) pile.
(5) If the original (left) pile is not empty, go back to step (2) and repeat.
This is a sequence of instructions that could easily be followed by a worker who has no idea how to sort forms, or, indeed, no idea of what “sorted” means. Without requiring any intelligence, this procedure allows anyone to perform the task.
Of course, it is necessary to know how to “pick up” a form, and how to compare two forms to see which comes first in alphabetical order. Office workers know how to perform these activities, but it is possible to produce an even more detailed algorithm for these subtasks too.
That is the nature of an algorithm. Every required action must be spelled out, so that the task may be performed by a worker who has no understanding of it. At some point it must be assumed that the worker knows something: they must be capable of following instructions, for example, so that algorithms do not regress into an infinite fuzz of infinitesimal detail. Even computers already know how to do some things: they can add and subtract and follow instructions. An algorithm for computer execution simply instructs how to perform a task in terms of subtasks that the computer already knows.
The five-step sorting algorithm detailed above is a realistic one that people do use, and it works. Unfortunately, it is satisfactory only for small piles of forms. If you start with just ten forms, the first is set aside, and compared with all nine of the others, to find the one that goes on top of the second pile. Then, the first of the remaining nine is set aside and compared with all eight of the others, to find the next. Then the first of the remainder is compared with all seven of the rest, and so on and so on. By the end of the procedure 45 individual comparisons will have been needed, which does not seem so bad. However, if you started with 20 forms in the pile, 190 individual comparisons would be needed: twice the number of forms, but four times the work. For large piles of forms, this algorithm can be impossibly slow. A mere ten thousand forms would require nearly fifty million comparisons.
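For readers who want to see the procedure in executable form, here is a minimal Python rendering, our sketch rather than the book’s. It selects the form that belongs earliest on each pass, where the pile-based description selects the latest, but the number of comparisons is identical either way, and running it reproduces the figures quoted above: 45 for ten forms, 190 for twenty, and 49,995,000 for ten thousand.

```python
def selection_sort(forms):
    """The five-step procedure above, with a counter recording every comparison."""
    forms = list(forms)
    comparisons = 0
    for done in range(len(forms)):                 # one pass per form moved to the new pile
        chosen = done                              # step (2): set the first form aside
        for other in range(done + 1, len(forms)):  # step (3): scan the remaining forms
            comparisons += 1
            if forms[other] < forms[chosen]:       # found one that belongs earlier: swap
                chosen = other
        forms[done], forms[chosen] = forms[chosen], forms[done]  # step (4): new pile
    return forms, comparisons

for n in (10, 20, 10_000):    # the ten-thousand-form case is noticeably slow, as predicted
    _, count = selection_sort(range(n, 0, -1))
    print(n, "forms:", count, "comparisons")       # 45, 190, and 49 995 000
```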
It would be an absolute disaster if programmers had to rediscover this well-known fact every time a programming task called for inputs to be sorted. Repeated experience tells us that most programmers do not notice that the simple sorting method is too slow for large data sets, and, of those who do notice, very few indeed are able to work out a significantly faster alternative. One of the essential parts of a formal training in programming is a long and demanding study of the large collection of algorithms that have already been discovered and analyzed, together with the Data Structures (carefully tailored, seemingly unnatural ways of organizing data for effective access) that go with them. As with any other engineering profession, it is impossible to do a good job without a thorough knowledge of what has been tried before. If a programmer starts the job fully armed with what is already known, they will have some chance of finding something new. Inventiveness is important: not all problems have been seen before. A programmer who does not already know the standard algorithms and data structures is doomed to nothing more than rediscovering the basics.
Business value proposition
Many of the algorithms and data structures for standard and interesting problems are thoroughly documented and analyzed in sources such as those provided by ACM, and the standard textbooks well known to all trained programmers. Knowledge of established algorithms not only gives programmers the tools necessary for solving standard problems without having to “reinvent the wheel” at every step, but also provides an essential foundation for designing truly new solutions. As Isaac Newton in 1676 explained his success in inventing calculus and understanding gravity, “If I have seen further it is by standing on the shoulders of giants.”
Algorithms are an abstraction, a high-level view of a method, that enables programmers to construct and investigate a design to solve the computational problem independently of the final implementation language. This description may be written in a style that is easier to read than finished program code, and thus amendments to the design can be made in a more informed manner than would be possible by examining the code. Knowledge of algorithms and associated data structures also enables programmers to determine the efficiency of their solutions. A simple and obvious algorithm that works well in simple cases may be completely unsuitable for commercial use. A professional programmer must thus understand that alternative algorithms exist, and be able to select the appropriate one, perhaps even invent a new one, for any given situation. Further, the computability of an algorithm must also be considered, to ensure that the algorithm is theoretically sound.
Summary of positive issues
Algorithms are an abstraction of a problem to be solved; thinking of a general algorithm rather than a specific problem allows a programmer to write cleaner, more maintainable code that has a strong chance of being reusable in other projects. An enormous collection of existing fully analyzed algorithms with well-tested implementations is freely available, and a vast range of books is also available on the subject. Design, analysis, and proof of new algorithms continue to be a major direction of research.
Summary of potentially negative issues
Algorithmic design needs qualified specialists in computer science and software engineering. Inappropriate algorithm design can result in inefficient or unworkable solutions being implemented. Lack of knowledge of algorithmic techniques by software professionals is a major cause of inefficiency and poor performance.
References
N. Wirth (1978). Algorithms plus Data Structures Equals Programs (Englewood Cliffs, NJ, Prentice-Hall).
D. E. Knuth (1975). Fundamental Algorithms (New York, Addison-Wesley).
M. Hofri (1975). Analysis of Algorithms (Oxford, Oxford University Press).
Associated terminology: Efficiency, Computability, Programming language.
Analog
Definition: An analog system is one that operates on a continuous scale.
Overview
Most real-world information is not directly perceived in digital form. Sound is really a complex and continuously varying pressure wave in the air, which we “hear” when it impinges upon small hair-like sensors in our ears, setting them aflutter. Looking at the world reveals an infinitely detailed spectrum of shapes, colors, sizes, and textures. Even simple measurements such as the length of a piece of string or the weight of a potato have unlimited precision: if a string is stated to be 293 mm long, clearly that is just an approximation limited by the ruler used; more careful measurement might reveal it to be 293.1, or 293.097 632 63 mm long.
The contrast between analog and digital is that analog measurements are thought of as corresponding naturally to real-world measurements and having unlimited natural precision, whereas digital measurements are always encoded as a string of digits and are therefore limited in precision by the number of digits written.
[Side note: Modern physics is of the opinion that the Universe actually has a fundamentally digital nature: light travels in particles called quanta, space is somehow granular and cannot be infinitely divided, and even time may really tick rather than flowing continuously. All of these phenomena occur on a scale far too small to be relevant in normal life, and some are still quite controversial even regarding their supposed existence.]
Traditional electronics built with resistors, transistors, vacuum tubes, etc. also has a fundamentally analog nature. A voltage is just as infinitely variable as the length of a piece of string. Before the digital age, electronic and even mechanical analog computers were common. In an electronic analog computer, all physical quantities, whether they are lengths, weights, or anything, are directly represented by a related electronic quantity: voltage, current flow, etc. A device that can add voltages (e.g. inputs of 3.1 V and 2.6 V producing a single output of 5.7 V) can be used to add lengths or weights. Even into the 1970s tide tables were still being created by analog computers.
Analog computers have inherent problems: controlling voltages and currents to a very precise degree is exceptionally difficult and expensive, and long-term storage of accurate analog data is practically impossible. Now, everything that could be done by analog computation can be done more accurately and reliably by encoding the analog inputs in digital form and performing a digital computation on them.
The only concession made by a modern computer to the analog “real world” is usually a few analog–digital converters. The term “A to D,” or A/D, is used for a device that takes an analog input (such as sound or voltage) and encodes it in digital form. The term “D to A,” or D/A, is the reverse. A computer with a microphone has a simple A/D device to convert the sound into digital form. A computer with a loudspeaker has a simple D/A device to convert digital signals into audible form.
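As a rough illustration of what an A/D converter does, the sketch below samples a continuous signal at discrete instants and quantizes each sample to a fixed set of levels. The sample rate, bit depth, and test tone are our illustrative choices, not properties of any particular device.

```python
import math

SAMPLE_RATE = 8000   # samples per second (an illustrative choice)
BITS = 8             # quantization depth: 2**8 = 256 discrete levels
LEVELS = 2 ** BITS

def a_to_d(signal, duration=0.001):
    """Sample a continuous signal, then quantize each sample to one of LEVELS values."""
    samples = []
    for n in range(int(duration * SAMPLE_RATE)):
        t = n / SAMPLE_RATE                                # sampling: discrete instants
        level = round((signal(t) + 1) / 2 * (LEVELS - 1))  # quantizing: discrete values
        samples.append(level)
    return samples

# A 440 Hz tone standing in for the "analog" input; a real A/D reads a microphone.
tone = lambda t: math.sin(2 * math.pi * 440 * t)
print(a_to_d(tone))  # eight digitized 8-bit samples of the tone
```

A D/A converter performs the reverse mapping, turning each stored level back into a voltage.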
Business value proposition
Analog computing has largely been marginalized by the digital age. Except for very small systems, nearly everything that could be done by an analog system can be done more easily and accurately with a digital system. The analog world does not generally impact modern business computing.
© Cambridge University Press