Moving to Client/Server Application Development: Caveat Emptor for Management

Copyright CAUSE 1994. This paper was presented at the 1993 CAUSE Annual Conference held in San Diego, California, December 7-10, and is part of the conference proceedings published by CAUSE. Permission to copy or disseminate all or part of this material is granted provided that the copies are not made or distributed for commercial advantage, that the CAUSE copyright notice and the title and authors of the publication and its date appear, and that notice is given that copying is by permission of CAUSE, the association for managing and using information technology in higher education. To copy or disseminate otherwise, or to republish in any form, requires written permission from CAUSE. For further information: CAUSE, 4840 Pearl East Circle, Suite 302E, Boulder, CO 80301; 303-449-4430; e-mail info@cause.colorado.edu

Moving To Client/Server Application Development: Caveat Emptor for Management

William F. Barry
Director of Administrative Computing
Dartmouth College
6209 Clement Hall
Hanover NH 03755-3574
(603) 646-3601
william.barry@dartmouth.edu

Client/server systems architecture is evolving to include a set of concepts and tools whose track record ranges from many exemplary success stories to some intriguing but unresolved problems. Much of what is being called, or sold, as client/server is confounded by a substantial amount of confusion due to still maturing designs, standards and tools, as well as vendor, consultant and colleague aggrandizement. This has resulted in a level of expectations about the benefits and appropriate use of client/server that is clouded by many myths, misconceptions and incomplete information. This paper presents a summary of management considerations and recommendations involving the move to client/server application development, and an overview of two mature client/server applications developed and extensively used at Dartmouth College.

Prepared for presentation at CAUSE93, San Diego, California, December 8, 1993

Introduction

Caveat emptor: let the buyer beware! Valuable consumer advice, whether a manager is faced with gold or snake oil in the client/server marketplace.

The preferred architecture of computing applications is continuing to evolve. Made possible by advances in hardware, software and network technologies, the concepts of "client/server" computing represent an emerging technology that offers valuable design features worthy of immediate use for many applications. From the perspective of most mainframe-modeled systems, moving to client/server represents a dramatic transition involving technological and organizational change that can often be far more complex and costly than many vendors or pioneers admit.

The client/server model is still evolving as a set of concepts and tools that range from proven success stories to some intriguing but unresolved application software and operating system problems. As with any change of the magnitude that some consider a "paradigm shift", most of what is being called, or sold, as client/server is confounded by a substantial amount of confusion due to unresolved problems, still maturing designs and tools, and vendor, consultant and colleague aggrandizement. These problems are exacerbated by the inexperience of computing professionals who are too busy for, or in some cases incapable of, successful training in this new model.
This has resulted in a level of expectations about the benefits and appropriate use of client/server that is clouded by many myths, misconceptions and incomplete information. This paper results from an effort to sort through these issues in order to reach intelligent decisions on information systems investments. It presents a summary of management considerations and recommendations involving the move to client/server application development, and an overview of two mature client/server applications developed and extensively used at Dartmouth College.

Why All the Client/Server Enthusiasm?

The attractiveness of client/server systems concepts may be attributed to compelling arguments that seek to better leverage desktop computing resources, further empower the independence of users, and take advantage of the increased capacity and reliability of networks. Furthermore, the intriguing technical challenges of continuing advances towards the goals of open systems in distributed or cooperative processing applications pose exciting, and frequently solvable, system challenges. These factors, considered in contrast with the burdens and frustrations of many of our older legacy systems, may explain some of the initial apparent willingness of many computing professionals to so fervently embrace client/server as the long awaited promised land of information technology.

Management decisions involving information technology investments should be driven by prioritized needs, based upon a realistic assessment of costs and benefits. Considerations of the exciting promise of emerging technologies should factor in the wisdom gained from computing's history of exaggerated promises and panaceas. As the initial fervor of client/server enthusiasm has been tempered with experience, our industry has once again been reminded that still maturing technologies need to be approached carefully. Acknowledging that many emerging technologies are built upon compelling, and in some cases attractive and thorough, systems design principles, the current appropriateness of each of these technologies and trends needs to be judged according to a few criteria. These criteria include: the maturity and reliability of the technology; the costs of adoption; the benefits and tradeoffs relative to other approaches; the availability of enough sufficiently skilled staff; and the balance between a complementary fit with existing systems goals and the timing of needs to embrace change in existing approaches.

Defining the Client/Server Model

"What does client/server mean? Is it merely a state of mind, a fashion, a philosophy, an attitude? No; not just. Reports coming back from the bleeding edge tell us that client/server is in fact a technology; tough, complex, incomplete, and not inexpensive. After several years of dynamism, the revolutionary fervor that has surrounded client/server is dissipating. Rather than a clear rational construct, client/server is a nest of new and interrelated challenges. As evidenced by its wide popularity... client/server crosses several (in fact, nearly all) technology boundaries: database, applications, networks, systems, and hardware. And perhaps most importantly, client/server implies dramatically new management approaches to gathering, accessing, and maintaining information itself." Stodder (1993)

One of the frequently referenced models of client/server computing was developed by the Gartner Group (1992).
This model divides an application into three logical parts (the user interface presentation, the business function processing logic and the data management) and two physical parts (the client system and the server system). The Gartner model outlines five different styles of client/server, distinguished by where the network division of the three logical parts of the application occurs.

[Figure omitted: the five styles of client/server. Source: Gartner Group, Inc.]

These five styles of client/server can be summarized as follows:

Distributed Presentation: data management, processing logic and presentation components all reside on the server hardware, and the presentation component is networked to the user's local device (terminal or desktop computer).

Remote Presentation: data management and processing logic components reside on the server hardware and, separated by the network, the presentation component resides on the user's local computer.

Distributed Logic: data management and some processing logic components reside on the server hardware and, separated by the network, additional processing logic plus the presentation component reside on the user's local computer.

Remote Data Management: the data management component resides on the server hardware and, separated by the network, the processing logic and the presentation component reside on the user's local computer. (A minimal code sketch of this style appears below, following the discussion of organizational responsibility.)

Distributed Database: some data management components reside on the server hardware and, separated by the network, additional data management components reside on other server hardware or the user's local computer. The processing logic and the presentation components reside on the user's local computer.

Depending on a variety of factors (including the application need, the configuration of hardware, network resources and available software tools), any one of these five styles of client/server may represent an appropriate and advantageous use of the client/server model. This brief overview of the Gartner Group model is highlighted here only to provide a framework for discussion of this evolving technology. As further client/server experience is gained and the available repertoire of tools evolves, the existing models will be extended or replaced. For example, one variation on the Gartner Group model, reported by Winsberg (1993), holds that the issue of distributed database is not directly germane to the model, since database management system software should hide the issues of distributed database from the application programmer.

"Defining client/server seems to be a new kind of parlor game for the industry. I've heard it described as a style of computing, a collection of technologies, an architectural platform, an application development method, a systems integration solution, a re-engineering tool and - heaven help us all - a paradigm shift." Johnson (1993)

Organizational Responsibility and Client/Server

In addition to providing models for various configurations of application components, client/server architecture creates an opportunity to move away from the model of centralized responsibility for developing and maintaining systems. Among its more optimistic and ambitious goals, the vision of client/server enabling the flexible, independent creation and support of decentralized computing systems can encourage a new model of decentralized system control and responsibility. These opportunities can complement and further extend the advantages of decentralized computing staff.
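As referenced above under the Gartner model, the following is a minimal sketch of the Remote Data Management style: the data management component runs in a server process, while the processing logic and presentation stay on the user's machine. This is illustrative only, not code from any product discussed in this paper; the port number and the toy lookup table are assumptions made for the example.

    # Minimal sketch of the Gartner "Remote Data Management" style.
    # The server process plays the data management component; the
    # client keeps the processing logic and presentation components.
    import socket
    import threading

    RECORDS = {"1001": "Smith, Pat", "1002": "Jones, Lee"}  # stand-in for a DBMS
    ready = threading.Event()

    def data_server():
        # Data management component: answers one lookup request over the network.
        srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
        srv.bind(("127.0.0.1", 5001))  # hypothetical port
        srv.listen(1)
        ready.set()
        conn, _ = srv.accept()
        key = conn.recv(64).decode().strip()
        conn.sendall(RECORDS.get(key, "NOT FOUND").encode())
        conn.close()
        srv.close()

    threading.Thread(target=data_server, daemon=True).start()
    ready.wait()

    # Client side: deciding what to request (processing logic) and
    # formatting the answer (presentation) both stay on the desktop.
    cli = socket.create_connection(("127.0.0.1", 5001))
    cli.sendall(b"1001\n")
    print("Record for 1001:", cli.recv(64).decode())  # presentation component
    cli.close()

In the Remote Presentation or Distributed Logic styles, the dividing line in this sketch would simply move, with more of the application's logic executing on the server side of the connection.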
However, institutional size and information technology budget levels need to be factored into decisions to move towards decentralized responsibility for the development and operation of computing applications. Large institutions that have an appropriate number of competent decentralized systems development staff can be effective in using a client/server application architecture to further distribute a system's total development and operational responsibility. A small university or college, having retained the centralized organizational model of supporting administrative systems, can more economically achieve many of the advantages of client/server systems with no change in who is responsible for systems development and support.

Partial Summary of Anticipated Benefits of Client/Server

* flexibility of independence between application components
* reduced later maintenance costs
* better utilization of lower cost (per MIP) decentralized computers
* elimination of high maintenance costs on older mainframes and minicomputers
* separation of some programming tasks (e.g. presentation) from the complexities of the network or database management system
* reduced dependency on one or a few vendors' proprietary systems environments

Summary of Management Considerations and Concerns

The essential issues for management are to understand what information system needs are to be met, and what the costs and success factors are among possible alternative approaches. An informed management strategy needs to understand which applications will gain the most from a client/server approach, what the cost and risk factors are, and when the organization should venture into this approach. Despite the obvious facts just stated, it is surprising how many managers seem willing to buy into a change in systems strategy without attempting to assess the facts.

Cost Issues

Understanding an institution's current level of information technology resources is essential in planning the costs or estimated savings involved in moving to client/server systems. Efforts to plan the costs of client/server applications need to consider: expenses associated with creating or upgrading campus networks; the capacity of installed desktop computers; the design complexities of many client/server applications; and staff learning curve or retraining issues. The trade press and industry consultants (e.g. Gartner Group, 1992; Kennedy et al., 1993; Cafasso, 1993; and Anthes, 1992) are increasingly reporting that moving to client/server actually increases costs.

As client/server methods and tools mature, and after an organization successfully completes the initial learning curve, many of the costs of creating and supporting these systems will drop, and the anticipated long-term benefits are expected to outweigh the costs. However, Ambrosio (1993) reports that recent studies completed by the Gartner Group conclude that, when considering the total cost of computing, moving from a mainframe-centric model of application development and support to client/server can cost 50% more than a comparable mainframe-based system. It is important to note that this analysis is based upon the current costs of new mainframe and mid-range hardware and software licenses, not earlier generations of higher-priced mainframes and astronomical platform-based software pricing.
A key cost factor in the Gartner Group's equations is that client/server based systems typically carry ongoing technician labor support costs, per user or workstation, that are higher than the labor support costs of a centralized system. But for some applications a move to client/server can be quite justifiable and desirable, as a way to better meet compute-intensive or screen-I/O-intensive needs that are best localized closer to the user's desktop. Therefore, moving to client/server should be considered in terms of the improved functional value delivered for some applications, not as an approach to cost savings.

Network Resources

Realizing the potential of client/server applications architecture depends upon adequate network bandwidth reaching all desktops to be served. Reliable, high speed, high capacity network services are essential to succeed with client/server. Network speeds of 10 Mb/sec to the desktop should be considered a minimum for applications of moderate complexity. If multiple desktop applications will each involve concurrent client/server sessions, then higher bandwidth is needed. To accommodate future applications involving the transmission of digitized voice, video or images, higher network speeds will be necessary (e.g. FDDI at 100 Mb/sec or ATM at 150+ Mb/sec). If a campus is still served only by asynchronous networks intended to support host-to-terminal session access, there are many reasons to emphasize network upgrades as a top priority. As a foundation for client/server applications, reliable network services should include, at a minimum: network access for all desired campus constituents; comprehensive electronic mail; print services; network authentication; and file transfer services.

Desktop Resources

In most institutions, the deployment of desktop computers has been incremental over a period of several years of rapid expansion in available desktop computing power. These computers have often been selected according to varying assumptions and understandings of desirable device capacity requirements. The result is often an installed base of not-fully-depreciated desktop equipment that is insufficient to support the memory, CPU speed, I/O channel or disk requirements of the client software and/or data needed as part of a new application. New client/server systems may require upgrades or replacements of some or all desktop PCs. If the desktop device is still a terminal, the cost of acquiring desktop PCs or workstations needs to be factored in. Other desktop considerations include the limits of earlier operating systems (e.g. the memory allocation limitations of DOS, or the limits on file sharing in older versions of Mac OS). A suggested minimum client CPU is a 386-class Intel chip (or a Macintosh with a 25 MHz 68030) and, for low-end server CPUs, at least a 486-class Intel chip (or a 68040).

Standards and Planned Architecture

Successful implementations of client/server applications must be built upon the standards of a well-defined systems architecture if they are to be reliable, scalable, expandable and enduring. A comprehensive strategy defining standard requirements for software, operating systems, data administration, APIs and RPCs, supported network protocols, and hardware configuration is a prerequisite for widespread success with client/server based systems.
Such an architecture may include locally defined standards for system component interface requirements or, preferably, established industry standards. The architecture should include a definition of the intended scope of the problem being addressed and the policies and procedures that will ensure adherence to the architecture's standards. The very real potential of interoperability, decentralized independent software development or acquisition, and independence from proprietary systems will fail without success at further establishing and adhering to standards.

Among the shifting sands of vendor consortium and user group standards definition efforts, the Open Software Foundation's Distributed Computing Environment (DCE) standards are emerging as a set of "vendor-neutral" standard APIs (application programming interfaces) that show much promise as a cornerstone of software interfaces upon which to implement client/server software. One key remaining weakness of OSF's efforts has been the delays in completing its set of Distributed Management Environment (DME) standards, which are intended to address many system and operation management needs. DME is anticipated to be completed in 1995.

Despite the efforts of OSF and many other standards-shaping organizations, achieving standards that are comprehensive for any domain, and then accomplishing vendor adherence to those standards, is a slow and painful process at best. Despite the tremendous advances that open systems efforts have successfully delivered, and the examples of standards-facilitated interoperability, open systems are not yet open! Given the politics of change, plus the free-market economic forces through which a vendor's product differentiation can determine market share and survival, progress towards standards and the commoditization of software components will continue to move ahead slowly, at best.

One way this creates a problem for client/server is referred to by Roti (1993) as "versionitis... the software malady that causes version 1.2 of product X to work only with version 2.1 of product Y and not with version 2.0 or 2.2. In client/server systems there may be as many as six or seven distinct pieces of software between the user and the database, there may be only one specific version of each of those pieces that works correctly with the others. Change any piece and the whole thing stops working." Considering the lack of sufficient debugging tools in many client/server application development environments, this problem of "versionitis" creates a significant cause for concern. Roti concludes that, given the not-yet-realized promise of "open systems", it may be wise to minimize the number of vendors involved in the components of a client/server application.

Data Considerations

The ideals that many vendors and systems designers are pursuing envision an institution's logically integrated database composed of networked, distributed application databases created and managed on different but openly accessible software and hardware platforms. According to this model, the integrated databases, some of which may exist on physically separate hardware platforms, can be made to appear to any client program as a single uniform system resource.
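To make the "single uniform system resource" idea concrete, the following minimal sketch uses SQLite's ATTACH facility as a local stand-in for the distributed case: two physically separate databases (imagine an institutional registrar server and a departmental server) are joined in a single query, as if they were one database. The file names and schema are illustrative assumptions; a production system would rely on a distributed DBMS rather than local files.

    # Two physically separate databases appear to the client program as
    # one logical resource. SQLite's ATTACH is used here only as an
    # analogy for distributed data management.
    import sqlite3

    reg = sqlite3.connect("registrar.db")  # "institutional" data store
    reg.execute("CREATE TABLE IF NOT EXISTS students (id INTEGER, name TEXT)")
    reg.execute("DELETE FROM students")
    reg.execute("INSERT INTO students VALUES (1, 'Pat Smith')")
    reg.commit()
    reg.close()

    dept = sqlite3.connect("department.db")  # "departmental" data store
    dept.execute("CREATE TABLE IF NOT EXISTS grades (student_id INTEGER, grade TEXT)")
    dept.execute("DELETE FROM grades")
    dept.execute("INSERT INTO grades VALUES (1, 'A')")
    dept.commit()
    dept.close()

    # The client program now sees one logical database spanning both.
    con = sqlite3.connect("registrar.db")
    con.execute("ATTACH DATABASE 'department.db' AS dept")
    row = con.execute(
        "SELECT s.name, g.grade FROM students s "
        "JOIN dept.grades g ON g.student_id = s.id").fetchone()
    print(row)  # ('Pat Smith', 'A')
    con.close()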
Some of the advantages of this model include:

* the ability to accommodate a department's local data needs in ways that coexist with solutions to institutional data needs
* the utilization of less expensive distributed hardware options that can be independently scaled to meet localized processing requirements
* opportunities for reduced network traffic when localized data needs can be met by a departmental server
* increased opportunities to design and tune database and hardware resources towards specialized needs (e.g. high volume transaction updates versus read-only query access)
* greater vendor independence

Some serious disadvantages (or remaining flaws) of this model include:

* the advantages of mixing and matching most current database applications come at a cost of complexity in making distributed applications work together. In the current state of these technologies, transparent interoperability is frequently only partially realized.
* the current state of systems management tools to support coordination of distributed heterogeneous databases is both weak and incomplete. Problems of configuration management, operations management, transaction journaling, auditability, contingency planning, security authentication and access controls remain to be properly resolved.
* offsetting the hardware budget savings of less expensive localized departmental servers are the increased labor costs of managing networked distributed hardware.

It is also anticipated that, simultaneous with the maturation of the complex tools and design methods required for distributed databases, the trend will continue towards lower costs for larger electronic, magnetic and optical data storage devices, faster and higher capacity data I/O buses, and more powerful single and parallel CPUs. More progress towards the outcome of these evolving technologies should be achieved prior to any substantial move towards distributed databases, unless there are other factors to justify such a change.

Issues of data administration should also be considered. As database management system (DBMS) tools continue to mature, much progress has been made in preserving the integrity and consistency of distributed databases. However, prior to worrying about which DBMS vendor has solved the "two-phase commit" problems of updating a transaction across a distributed dataset, or before assessing whether a vendor's product can handle data rollback across heterogeneous databases, many organizations should continue efforts to remedy their lack of successful data administration on their central systems. These challenges won't get any simpler to solve if databases become more distributed. This is not an argument against client/server. It is a reminder of the need to judge an organization's position relative to newer technology against the degree of success with which more mature methods and technologies have been applied!

Is a reluctance to move to distributed databases running against a real or perceived trend towards "downsizing and decentralizing" all of computing? Yes. Is this a sound position, or is it just resistance to change and a perpetuation of the old model by a shortsighted "mainframe bigot"? According to Gillin (1993), when asked about the future of centralized data storage, Bill Gates (founder and CEO of Microsoft) stated, "We're in the information age and that means there'll be a lot of information. There are still large economies of scale in storage costs and administration of centralizing that data, and as fiber brings communication costs down, you'll be able to pool a lot of that data in one place... The productivity application world and the data center world are not separate any more."

Systems Management in a Heterogeneous Environment

In contrast with the systems management tools available on mini or mainframe computers, an area of concern in the client/server arena is the lack of complete and reliable tools for a wide array of important systems management tasks. These missing or incomplete tools include operating system or application utilities to support operations scheduling and control; audit tools; rollback journaling; backup and recovery tools; performance monitoring and capacity planning tools; and change control utilities.

Staffing Issues

The experience and skill levels of current IS staff, and the costs of new training, represent a substantial obstacle to the adoption of client/server methods. Anthes (1992) reports that the Cambridge, Massachusetts research firm Forrester Research Inc. found that 75% of the Fortune 1000 firms included in its survey lack the skills needed to work with client/server based systems. Exacerbating this problem is the burden on current staff to support required production maintenance and the ongoing stream of enhancements to existing systems.

While it is wise to encourage the highest potential of our staff, it is also essential to acknowledge limits. Many of the very skills that made some 3GL programmers successful (e.g. linear procedural thinking) seem to get in the way when the more abstract reasoning required by the multiple layers of software and data manipulation control becomes involved. Most data processing professionals should be given every chance to make the leap to client/server, and management needs to find ways to fund and nurture that effort. However, there remains the pragmatic reality of some staff who can't, or don't choose to, keep up with new technologies. For example, consider the difficulties some staff have had moving from record-oriented to set processing, the slowness of introducing many software engineering methods, and the difficulty some have with either full or partial data normalization. These past performances should give a manager second thoughts about the future career paths of some programmers and systems staff. Separate from issues of skills retraining, the fact remains that there is a shortage of computing professionals with the proper mix of experience with complex applications development and the talents needed to design and build client/server systems.

Some Systems Are Not Appropriate to Current Client/Server Technologies

Not all forms of centralized systems are going away. The trend of greater MIPS per dollar spent on smaller machines does not, by itself, dictate that the model of the central computer is no longer of value. In nearly all cases cited in business management or computing trade journals, the business cost analyses that compare the relative cost and capability of mainframe or mini computers to the alternatives are comparing old machines to newer, more powerful ones.
While the old model of centralized million-dollar hardware generating unacceptable recurring maintenance costs to support high-priced application software is being replaced by the competitiveness of newer hardware and software economies, centralized computing applications and operations continue to provide economic and operational advantages for many institutional administrative information systems needs. The lower prices of desktop and workstation software are forcing down the previously astronomical pricing structures associated with mainframe or minicomputer software. The marketplace is forcing most applications vendors to shift to per-user software licensing rather than purely machine-size pricing. The systems management tools necessary to administer distributed systems do not yet compare favorably with those available on single or clustered systems. For small universities and colleges, having most business functions located within a single campus, centralized systems continue to provide the most efficient economies of scale for many institutional administrative systems.* When the costs of decentralized hardware, diseconomies of scale in vendor negotiation, programming or support staff, and non-coordinated data definition are considered, many distributed systems are proving to be more costly than centrally developed and managed systems.

As Blythe (1993) reported in the CAUSEASM listserv's summary of an electronic roundtable discussion on "Client-Server Computing: Management Issues", Ted Klein, President of the Boston Systems Group, writes that there are five types of systems that are NOT applicable to downsizing with today's technologies:

1. applications with large databases which cannot be easily partitioned and distributed
2. applications that must provide very fast database response to thousands of users
3. applications that are closely connected to other mainframe applications
4. applications that require strong, centrally managed security and other services
5. applications that require around-the-clock availability

Software Development Considerations

A software development team can move towards client/server by establishing, and beginning to encourage adherence to, a set of client/server modeled software, data access and user presentation design recommendations. These recommended design features, refined by experience with pilot applications, can begin to incorporate OSF model concepts for RPCs, APIs, and other utilities of the DCE model. In addition, good modular design of software (for any application architecture) should incorporate divisions of programming code between GUI or screen presentation, SQL statements, business rules, and the linkages between the screen I/O and business rules (a minimal sketch of this layered separation follows the Dartmouth project descriptions below). Efforts should also be made to experiment with PC, Macintosh or workstation "front-end" tools that use SQL to access host system databases. Consideration should be given to learning more about client/server success stories and the opportunities to partner, for example, with the efforts of Dartmouth College's DCIS or Cornell's Mandarin project.

In considering new software tools:

* Attempt to minimize the varieties of DBMS and GUI vendors until there is further progress on standards efforts.
* Favor vendors or programming tools which support industry efforts towards standard protocols (e.g. SQL, OSF's DCE RDAs and APIs). Assess vendors' demonstrable commitment to standards and gateways with proprietary DBMS products.
* DBMS system recoverability: look for working solutions to two-phase commit needs, or data rollback across heterogeneous databases.
* Query optimization: be cautious about proposed database designs that distribute databases which may need to be joined for queries. The technologies of query optimization, still maturing within most single-vendor DBMS tools, have yet to adequately address query optimization across multiple vendors' DBMS products.

Overview of Two Successful Dartmouth College Client/Server Projects

Dartmouth College's BlitzMail(R) electronic mail system. Placed into production in June 1988, BlitzMail was created to provide a GUI electronic mail system for a primarily Macintosh-equipped user community. Currently serving 17,000 members or affiliates of the Dartmouth community, this client/server based system is capable of 1000 simultaneous client connections and recently hit a new usage peak of 73,000 messages sent in one day. Connecting more than 7000 Macintosh-based clients spanning over 200 AppleTalk LANs, with a TCP/IP internet link connecting UNIX and X Windows clients as well, BlitzMail is a ubiquitous part of student, staff and faculty work at Dartmouth College. The GUI client software was written in Pascal; the server software was written in C. The server hardware currently employs five NeXT machines; we plan to install a DEC Alpha workstation as a sixth server in late December 1993. This single new server should provide the capacity to handle an additional 500 simultaneous users. Periodically, upgraded versions of the Macintosh client are distributed using BlitzMail itself. Such upgrades are also accessible for users to copy to their desktop Macintosh computers from a public AppleShare file server. In addition to a full complement of electronic mail features, BlitzMail allows arbitrary Macintosh documents (such as word processing or spreadsheet documents) to be sent along with mail messages as enclosures. BlitzMail also provides a Macintosh interface to bulletin boards containing "semi-official" information about groups and departments around the campus. BlitzMail acts as a client to a standard NNTP server which is part of our normal Usenet news system.

Dartmouth College Information System (DCIS). Dartmouth College, with the support of Apple Computer, Inc., has developed an integrated campus-wide information system. DCIS is a client/server architected system which provides an average of 700 users per day with access to over 60 local databases and hundreds of off-campus Internet database resources. DCIS provides access to data from a variety of sources, including: reference encyclopedias; indexes to the Dartmouth College Library's collections and the journal literature; scholarly resources such as the Oxford English Dictionary and a library of commentaries on Dante's Divine Comedy; administrative systems data resources such as the central supplies inventory; and resources like Books in Print. The software architecture uses the OSI model of communications protocol stacks, incorporating the Z39.50 search and retrieval standard, WAIS protocols, and other locally developed protocols. DCIS databases are distributed across several mainframes and servers spanning multiple operating systems and database management systems. Written in C++, the viewer (client) portion of DCIS is distributed to thousands of users. DCIS has a self-update capability and is also distributed via Dartmouth's BlitzMail and from an AppleShare public file server.
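Both BlitzMail and DCIS keep client presentation separate from server data management. As referenced under Software Development Considerations above, the following minimal sketch (hypothetical code, not drawn from DCIS or BlitzMail) illustrates the recommended division of programming code into presentation, business rule and data access layers, so that a later client/server split can fall cleanly on a layer boundary. The table and function names are illustrative assumptions.

    # Layered separation: each layer can later be moved across the
    # network without rewriting the others.
    import sqlite3

    def fetch_balance(conn, account_id):
        # Data access layer: the only place SQL statements appear.
        row = conn.execute(
            "SELECT balance FROM accounts WHERE id = ?", (account_id,)).fetchone()
        return row[0] if row else None

    def can_withdraw(balance, amount):
        # Business rule layer: no SQL and no screen I/O.
        return balance is not None and 0 < amount <= balance

    def present_decision(account_id, amount, allowed):
        # Presentation layer: formatting and screen output only.
        verdict = "approved" if allowed else "denied"
        print(f"Withdrawal of {amount} from account {account_id}: {verdict}")

    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE accounts (id INTEGER, balance REAL)")
    conn.execute("INSERT INTO accounts VALUES (7, 100.0)")
    present_decision(7, 25.0, can_withdraw(fetch_balance(conn, 7), 25.0))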
The DCIS system and tool set are available to be exported to other environments, including other academic institutions. The available products include: viewer application software; search and retrieval protocol software (a WAIS gateway, a Z39.50 gateway, and a Telnet connection server); servers for the BRS, PAT and SPSS database managers; and an authentication server (IDAP).

Conclusions

The following conclusions can be reached about current client/server methods and technologies:

* client/server is real, desirable and inevitable for many applications
* it should be seen as a means to better meet user needs, not as a way to cut costs
* for most applications it is neither simple nor cheap
* central administrative computing departments should provide leadership in realizing its benefits and optimizing an organization's chances of success with client/server
* it is not a panacea and is not, for the foreseeable future, appropriate for all applications
* it is a fairly immature, emerging technology full of risks and unresolved pitfalls

References & Bibliography

Ambrosio, Johanna. "Client/server Costs More Than Expected", Computerworld, October 18, 1993, p. 28.
Anthes, Gary. "Middleware Eases App Development", Computerworld, December 29, 1992, p. 113.
Anthes, Gary. "Training Biggest Obstacle in Client/server Move, Survey Says", Computerworld, December 14, 1992, p. 89.
Atre, Shaku and Storer, Peter M. "Client/server Tell-all: 10 Things You Should Know Before You Tackle Your Downsizing Project", Computerworld, January 18, 1993.
Bond, Elaine. "Danger! There's Nobody to Steer Client/server", Computerworld, April 26, 1993, p. 33.
Brentrup, Robert J. "Building a Campus Information Culture", CAUSE92 Conference Proceedings, December 1992.
Cafasso, Rosemary. "Client/Server Strategies Pervasive", Computerworld, January 25, 1993, p. 47.
CauseASM listserv. Mediated Discussion of the Management Issues of Client/Server Computing, a report summarizing the June through October 1992 electronic listserv discussion involving computing representatives of 41 colleges or universities.
Computerworld. "Guide to Client/Server", April 26, 1993, pp. 73-84.
Cornell Information Technologies. "Client/Server Applications at Cornell, Doing It Fast and Right", 1992.
Finkelstein, Richard. "RDBMS for Client/server: They Don't Quite Measure Up", Computerworld, February 8, 1993, p. 59.
Gantz, John. "Is Client/Server a Lot of Hot Air?", Computerworld, December 21, 1992, p. 25.
Gartner Group. "Strategic Planning Assumptions for Information Technology Management", summary of CAUSE92 conference presentation, December 1992.
Gillin, Paul. "Not Dead Yet", Computerworld, May 10, 1993, p. 30.
Johnson, Maryfran. Computerworld Client/Server Journal, November 1993, vol. 1, no. 1, p. 3.
Kelly, David. "Training for the Client/Server Leap", Computerworld, May 3, 1993, p. 129.
Kennedy, Michael and Gurbaxani, Andrew. "When Client/Server Isn't Right", Computerworld, April 19, 1993, p. 99.
Keuffel, Warren. "DCE: The Nations of Computing?", Database Programming & Design, September 1993, pp. 31-39.
Kiernan, Casey. "Client/Server: Learning From History", Database Programming & Design, September 1993, pp. 46-53.
Parker, Mike. "A (Relatively) Painless Way to Move to Client/server", Computerworld, June 28, 1993, p. 33.
Roti, Steve. "Pitfalls of Client/Server", DBMS, August 1993, p. 67.
Stodder, David. "Database: the Next Buzzword?", Database Programming & Design, July 1993, p. 7.
Tanenbaum, Andrew S. Modern Operating Systems (Section 10.2, The Client-Server Model), Prentice Hall, 1992, pp. 402-417.
Tobin, Ed. "Client/server Isn't the Answer for Everything", Computerworld, May 10, 1993, p. 31.
Vizard, Michael. "Client/Server Caveats", Computerworld, February 15, 1993, p. 16.
Wexler, Joanie. "Stanford Navigates Client/server Peaks, Valleys", Computerworld, June 29, 1992, p. 65.
Winsberg, Paul. "Designing for Client/Server", Database Programming & Design, July 1993, p. 29.

* It should be noted that the model of centralized systems can be consistent with goals of increasing departmental access to and control of data and systems resources.