Evolution of an Aging Information System at Yale

Copyright 1991 CAUSE. From _CAUSE/EFFECT_ Volume 14, Number 4, Winter 1991. Permission to copy or disseminate all or part of this material is granted provided that the copies are not made or distributed for commercial advantage, the CAUSE copyright and its date appear, and notice is given that copying is by permission of CAUSE, the association for managing and using information resources in higher education. To disseminate otherwise, or to republish, requires written permission. For further information, contact CAUSE, 4840 Pearl East Circle, Suite 302E, Boulder, CO 80301, 303-449-4430, e-mail info@CAUSE.colorado.edu

EVOLUTION OF AN AGING INFORMATION SYSTEM AT YALE

by Carol Woody

************************************************************************
Carol Woody manages a team of internal consultants supporting financial business systems for Yale University. She has also worked in banking, manufacturing, and retailing during her twenty years of experience in data processing. She holds a BS degree in mathematics from the College of William and Mary and an MBA with honors from Wake Forest University. She is a member of the Association for Systems Management and a charter member of the National Association of Female Executives, and is listed in Who's Who of Finance and Industry and Who's Who of American Women.
************************************************************************

ABSTRACT: This article presents Yale University's "constructive evolution" method of dealing with the problem of aging information systems. After a discussion of the reasons systems become outdated--primary among them reactive maintenance practices--the author explains the process at Yale by which systems development personnel "partner" with systems users to ensure the constructive evolution of systems over time, with the concept of continual change and improvement as the foundation for the partnership.
An example of the process is provided through a description of the constructive evolution of Yale's personnel/payroll system over a period of five years.

Within minutes of its installation, an information system will require changes: information needs of those using the system change; legal requirements change; and, inevitably, requirements that were not identified in the development process surface during use. While these pressures for change are building, new technologies are emerging that make information retrieval easier and more affordable, often rendering existing systems obsolete. For small systems residing on a single platform used by a single department, this kind of change is manageable--though never convenient. For large, critical systems which are often used across multiple platforms by many departments, change can be a nightmare at best, and it gets worse the longer the outdated system is in place.

Of special concern in colleges and universities are the administrative workhorse systems that process large volumes of data. These applications support the central repositories of information and satisfy the operational, functional, and analytical needs of the organization. They generate paychecks, pay vendors, solicit donations from alumni, bill students, and keep the books. Beyond the collection of computer programs that comprise the application are the extensive work processes and procedures needed to control input and distribute output. Many of the procedures are monitored by trained clerks acting as gatekeepers, checking signatures and reviewing forms for compliance with policy. Installing these major systems in the first place was not simple, keeping them running properly is a full-time job, and changing them without jeopardizing ongoing processing is a real challenge.

Reactive maintenance is the problem

While a myriad of methodologies are available to address new development, too little creative attention is spent on maintenance.
When a newly developed system moves into production, maintenance technicians take over and the developers move on to another project. Production systems in most environments too often move into the "if it ain't broke don't fix it" category. In a typical "reactive" maintenance environment, when changes become necessary, they are usually patched in as quickly as possible with an emphasis on keeping the jobs running. Too frequently, the programmers do not hear about changes until the moment they must be implemented, dictating a "quick and dirty" approach. Programs which have thus been reworked several times become "spaghetti" code, making further changes more difficult. If you have staff who really know the systems, such changes can nonetheless be made quickly and efficiently. However, as time passes this knowledge fades with staff turnover or when other development work draws away the more experienced staff.

Maintenance programming has long been considered the backwater of data processing because of this crisis orientation. In reality, maintenance programmers are supporting the information lifeline of the organization. Their job is extremely complex. Unfortunately, these assignments are often drawn by the least competent staff person or the one with the least seniority. The maintenance backlog grows while the "rookie" struggles to make changes to code generated by sophisticated programmers. No amount of documentation can make up for skill, creativity, and experience. Rotation plans shifting staff between development and maintenance may add more expertise, but may also contribute to staff turnover because of the stigma attached to maintenance. If the traditional maintenance process worked well, we would not be hearing of the ever-rising level of resources required to maintain existing systems!

Possible solutions for aging systems

Eventually, systems maintained in this way become outdated and are no longer meeting the needs of their users.
Management is forced to take some action, and four potential responses are available: rewrite the system, purchase a replacement package, retool the existing programs, or employ a "constructive evolution" process. Each option can be effective, but the best choice is one based on careful strategic planning and analysis of the system's degree of obsolescence, rather than the path of least resistance.

Rewriting the system to fix all the shortcomings at once requires commitment of time and resources. To develop a replacement system, one must understand all of the workings of the existing system as well as the problems and unmet needs. The development cycle of the rewrite itself can take several years, during which both the new and old systems must keep up with changing needs. The large programming staff required during the development effort is awkward to build. New hires needed for a massive coding effort may not have the skills needed for support in the long term, and reducing staff after the development phase can be difficult. Consultants can be used to fill the ranks, but their acquired knowledge leaves with them when their contract expires. Verification of the new system requires double work for the user. Unless substantial portions of the existing system are obsolete, rewriting the entire system is a heavy commitment to replace what still works.

Purchasing a package will shorten the replacement timetable and reduce the staffing demands--for a price. In this era of tightening budgets, the cost is becoming more difficult to justify if the existing system is at all functional. As with a rewrite, the up-front analytical understanding of the existing system is required to evaluate package options and redesign interfaces with other systems. The vendor frequently benefits because management is dazzled by the sales hype and unwilling to consider other options; unfortunately there is a vast difference between a demonstration and a working system.
This must be viewed as betting the future on the expertise and survival ability of the vendor, and mistakes can be very expensive. Purchasers must keep in mind that they are paying for the use of an existing system which will have to change over time to keep current with their needs and those of other buyers. Vendors' priorities do not always coincide with those of the institution. Programming resources needed for maintenance after installation will diminish if the vendor provides effective ongoing support; enhancements and corrections can be available for a maintenance fee. Not all changes will apply to your institution, but to keep current for vendor support, they must be installed and tested. The speed and accuracy of support hinge on the quality of the vendor's technical staff and the communication skills of the on-site coordinator. If total support is assumed by in-house technicians, the problems are the same ones outlined for reactively maintaining the existing programs, with one catch--dependency on the vendor's documentation.

During package selection, the implementation group is faced with the decision of reworking the package to fit existing procedures or modifying the procedures to fit the package. Changing vendor code requires additional programming for each vendor maintenance change, which expands the ongoing cost. Changing clerical procedures may sound like a simple solution, but few user department managers are sufficiently experienced in the system conversion process to commit to the up-front planning required for a smooth transition. They are busy with the ongoing functions of their department. At installation time they blame the new system for everything that goes wrong, when many of the problems result from lack of training and failure to institute the necessary new procedures. As a result, the new system becomes the scapegoat.
The trauma of changing existing--and sometimes effective--procedures just to meet vendor specifications in a package can be very hard on an institution.

The "vendor partnership" approach to system development is getting positive results at many institutions. The vendor agrees to a team development effort for new packages which the vendor will later market elsewhere. The application area must be sufficiently generic for the vendor to see profit from the involvement, and the vendor must be willing to negotiate an attractive price for the institution to justify the resource commitment. The institution must also be prepared to act as a reference and demonstration model for the vendor's marketing efforts, which can consume staff resources.

Retooling the programs is another possible solution to aging systems. The latest trend in system support is the source code analyzer. Such a program will evaluate source code and provide guidelines for restructuring to make it easier to maintain. The sales pitch is great, especially if the programs are old and bear no resemblance to the available documentation, but the price tag for a once-and-done operation is high. Each analyzer has restrictions and can generate more problems than it solves if not used wisely. Also, not all of the programs in a system contain problem code. Generally only 20 percent of the programs cause the majority of maintenance, further limiting the applicability of this approach.

Reverse engineering is another emerging tool set which has had positive results in very structured environments. Sophisticated software can convert the code into a design structure for recoding or retooling using CASE facilities. The use of CASE to retool requires institutional commitment to a specific language and database structure for a code generation process to be complete.
Even just using the process to redevelop the design from the existing code would be helpful, but the CASE facilities we have worked with require a lengthy learning curve to make them useful. With the relatively high cost of the software added to this learning time, most institutions are not ready to make a commitment of this level for reworking existing systems unless they feel the tools will also work for much of the planned development. Nor are they ready to make such a commitment to a still emerging technology.

Our solution--constructive evolution

What do you do when ...

* the existing system works but the maintenance needs are high;
* critical changes must be made immediately, and priority conflicts will arise with key groups whose needs will be delayed;
* the application is highly tailored to the institution and no packages fit; and
* money is not available for package purchase or a rewrite?

At Yale University, we implemented a plan for what we call "constructive evolution." Evolution by itself implies some form of selection. Constructive evolution is a directed selection process with an agreed-upon goal.

The key ingredient of the approach is teamwork with the user departments. This team process must include sharing of decision-making, design efforts, and responsibility. Instead of a partnership with an outside vendor, partnerships are formed with the application user departments. Participants must be decision-makers from their respective organizations. The data processing group provides high-level analyst support and project management skills. The users provide application knowledge and direction. Team management skills are a necessity--effective teamwork does not just happen. Technicians frequently underestimate the knowledge of system users, especially if the user groups have programming units of their own, which creates an atmosphere of competition instead of cooperation. A group leader must be selected to provide leadership and maintain cohesion.
The evolution concept is based on the premise that change, over the long term, is predictable. Resource levels and system design can be strategically planned to accommodate the expected level of change. The users know of many change requirements long before the technicians hear about them. Involving users in the planning process provides access to this information. Making users partners, jointly responsible with the data processing staff, generates the needed commitment.

Major applications are usually shared by several departments. Though they may be located next to each other, they frequently do not communicate well. Program maintenance for one department may have a drastic effect on another which is not apparent until new code is installed, creating a ripple effect of crisis changes. Including representatives from all affected groups on the team provides a forum for dialogue; frequently procedural changes can be implemented, eliminating a programming need, if the right atmosphere and incentives are in place. The team becomes a sounding board for ideas that will improve the whole process.

The maintenance curve can be controlled, but it requires the same type of management needed in a new development project--skilled technicians, strong analytical input, and management commitment from both data processing and user departments. At Yale, we have found it best to enforce a limit of one person from each participating group, ensuring a single responsibility point and building a strong systems knowledge within each work unit. Also, a smaller group helps to keep discussion focused. Dialogue among the participants must be open and extend beyond formal meetings. When the user department participants control the funding, the team approach enables them to tie their funding commitments to the work as it is being requested; discrepancies between desired and actual surface quickly.

Initially, more maintenance is needed to catch up on the backlog.
Later the emphasis transitions to control of data and tuning of processes, allowing the team to manage the response to change. Over the long haul, a solid and well-supported existing system becomes an information center for new development efforts, resulting in a blending of maintenance and new development. The programming team which supports the evolution must also be a blend of strong analytical skills, knowledge of the present system, and design ability to provide the needed skills as the system evolves.

Using the constructive evolution approach, the existing system is progressively modified to meet changing needs and to make the code easier to change in the future. Immediate needs are blended with long-term requirements in a strategic approach. Team members are charged with constantly monitoring their areas for ways the system can become more effective. All ideas are presented to the team for review and refinement. Selected changes to be installed are analyzed to break them into smaller pieces whenever possible. Small units are more easily verified and limit production exposure. Emphasis is first placed on simplicity instead of processing efficiency; research has shown this produces fewer errors in the code. Efficiency is added based on shown need, not programmer theory.

This "constant change" philosophy goes against the traditional view of system projects, but it is easier for user management to understand and verify. Constant change forces the user departments to develop a notification process internally. The "partners" determine the work priority and sequence based on negotiations. Resources must be committed so participants are assured work will get done. The speed at which their requested changes are implemented depends on their creativity and involvement in the negotiation process and their accuracy in verifying installed changes. For extremely volatile systems, the negotiation process may be weekly.
More commonly, the team meets monthly to adjust priorities and monitor progress.

The evolution process can be segmented into five phases:

1. Developing a long-range plan.
2. Reworking high-profile programs and repetitive maintenance tasks to minimize resource consumption.
3. Developing standard gateway points for automated input and output to other new and existing systems.
4. Tuning the system to have information where it is needed when it is needed.
5. Documenting the system.

These are not distinct phases, since work from one phase can be blended with changes in another, but they provide direction for the planning process. There may be later phases beyond these that we have not yet identified, as our systems are still evolving.

Long-range plan

Planning must be the first step. The primary objective of the planning process is to generate dialogue among the team members to educate them on each other's needs and share ideas. As a starting point, we use the current maintenance backlog and identify unfulfilled requirements. Team leadership and knowledge of the existing system are critical at this stage to ensure that real needs, not perceived ones, make up the workload. Shop standards that may not be implemented in an older system should be included to make it easier to rotate development staff in the future. System software and hardware changes need to be considered also. Because change never ends, this is an open-ended process that continues for the life of the application. Establishing good dialogue up front will enhance the success of effective planning as time passes.

Requirements should be viewed in several ways. If the system logically divides into functional processes, an assessment of changes by process will alert participants to the need for a different approach. An assessment of need for change by program will flag the high-profile 20 percent that should probably be rewritten.
Dialogue about each change will generate ideas that can point to a better way of approaching the problem. Assessment of the changes based on technical skill requirements will alert planners to bottlenecks where special skills are in scarce supply. The ultimate product is an agreement by all the participants on how the system will be managed, based on a complete understanding of the needs of the institution. The plan should be completely reevaluated each year, since participants and management direction also change. From our experience, the initial development of the plan takes several weeks and the annual review consumes a few days. The need for and timing of other phases will depend on the plan and, in many cases, the age and functionality of the existing system.

Reworking high-profile programs

Certain types of processes must be redesigned completely if they require repetitive changes which consume resources. Year-end processes (fiscal and calendar) are typical change points, and most frequently the programs can be redesigned to handle adjustments automatically. Large volatile programs must be reorganized and internally structured to make change points easier to locate. An overall system perspective and analytical skills are critical at this stage. Typical system maintenance staff can describe how a program works, but they cannot see beyond the present code to redesign the way the job is done. These individuals have typically worked independently with little guidance and impossible deadlines. Strong leadership skills are needed to get them involved in the team process.

Our approach has been to develop each change design in a group session with the analyst, team leader, and participating programmers. A change list is developed along with an estimate and a timetable. The programmers are responsible for producing the desired results and reporting progress at established intervals.
It takes a year of functioning under the new direction to retrain the thinking process away from quick fixes. To keep programmers from reverting to old habits, their performance evaluations must include an assessment of how well they implement changes that minimize maintenance.

Each time a program is opened for change, the technicians must do more than just make the fix. If programs have hardcoded values, these should be moved to external tables. Field sizes that are too small should be expanded. Overlapping data fields may conserve disk space, but the savings may not be worth the confusion when a programmer unfamiliar with the system is trying to maintain it. Date handling routines need to take into account the fact that a new century is approaching. (Some are not even prepared to handle a new decade!) Obsolete code should be removed instead of bypassed so future technicians can see what the program is really doing. Unused programs should be eliminated, since it is easier to write from scratch than to rework existing code. Extraneous programs also clutter the evaluation process by making the volume seem greater for system-wide changes such as database expansions. Clever code may work well for a while, but can be impossible for another technician to decipher, so such programs should be rewritten. A written list of guidelines agreed to by the team members is useful.

To accommodate these needs in the plan, we identify them and rank the impact as local (this program only), functional (isolated to a related group of programs), or system-wide. Local changes are handled with each change touching the program. Functional and system-wide needs are merged into the plan. If the entire process is to be reworked, the need frequently disappears. All technicians working on the system must be committed to the long-term direction of the plan. Critical deadlines may shorten what can be done at a specific time, but the needs can be identified and addressed in future changes.
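The guideline about moving hardcoded values into external tables can be sketched in modern terms. This is a hypothetical illustration (the pay-rate names and values are invented, and Yale's system was not written in Python); the point is that a policy change becomes a data edit rather than a program change.

```python
# Before: a policy rule buried in program logic. Changing a rate means
# opening, recompiling, and re-verifying the program.
def overtime_rate_hardcoded(union_code):
    if union_code == "A":
        return 1.5
    elif union_code == "B":
        return 2.0
    return 1.0

# After: the rates live in an external table (a dict here, standing in
# for a file or database table) that can be updated without touching code.
OVERTIME_RATES = {"A": 1.5, "B": 2.0}

def overtime_rate(union_code, rates=OVERTIME_RATES):
    # Unknown union codes fall back to the straight-time rate.
    return rates.get(union_code, 1.0)
```

Either version pays union "A" at time-and-a-half; only the second survives a contract renegotiation without a program change.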
Estimating assignments and workload planning is a must for technicians working in this environment. When cleanup work will affect required timetables, they must be able to alert team members and renegotiate the changes.

Standard gateways

If the system resides on a platform that does not coincide with the standard platform for the shop, or uses a database structure which is not readily tapped from outside the existing system, the development of standardized input and output routines should be considered. If these data gateways are available to feed to or extract from the existing system, then front-end and back-end systems can be independently developed to support new requirements. To avoid constant file comparisons, the ability to verify data at the creation point is critical. Otherwise, the gateway will become a bottleneck while users of the receiving process first clean up the data. Shadow files refreshed at specific intervals have been successful in providing data across platforms for verification. If the database is too large to copy, key subsets of verifying information are extracted.

Subsystems on varying platforms can save hours of development time by allowing interfaces with new technology to extend the capabilities of the existing processes. This goes against the data processing orientation which abhors duplication, but it allows use of new tools without rewriting all the code that still works. Gateways can become support cross-over points between central and department programming staffs. Fixed definitions of input and output, as well as the edit rules, minimize confusion.

System tuning

It can be cost prohibitive to keep available at all times the volume of information moving into and out of a large system. How much data and how fast they will be needed becomes another negotiation item for the team. History requirements must also be included, and these change over time with legal and business needs.
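The standard-gateway idea described above--a fixed record definition plus edit rules applied at the creation point, so bad data never crosses into the central system--might look like the following minimal sketch. The field names and rules are hypothetical, chosen only to show the shape of the approach.

```python
# A fixed gateway definition: each field is paired with an edit rule.
# Because both sides of the gateway share this one definition, front-end
# and back-end systems can be developed independently.
GATEWAY_FIELDS = {
    "employee_id": lambda v: v.isdigit() and len(v) == 6,
    "pay_cycle":   lambda v: v in {"WEEKLY", "MONTHLY"},
    "amount":      lambda v: v.replace(".", "", 1).isdigit(),
}

def validate_record(record):
    """Return a list of failed fields; an empty list means the record
    passes all edit rules and may cross the gateway."""
    errors = []
    for field, rule in GATEWAY_FIELDS.items():
        value = record.get(field, "")
        if not rule(value):
            errors.append(field)
    return errors
```

Rejecting a bad record at its creation point keeps the receiving process from becoming the bottleneck where someone else must clean up the data.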
The lag time between when the data are created and when a recipient calls with a problem must also be evaluated. Many existing system users are service departments who must respond to questions from their users. Requiring them to define an acceptable service level to their customers helps them to identify typical questions, define the data needed to research the question, and define acceptable search times. This assessment is needed to provide guidelines for evaluating the effectiveness of the existing system.

Constructive evolution also creates subtle changes in backup requirements and off-site data needs. Participants must be on the watch for holes which could cause recovery problems.

Documentation

One of the hazards of the fast pace in maintenance is the lack of time to maintain effective system documentation. Our experience has always been that the typical documentation is a set of tomes containing all the combined wisdom of the developers on the system--great history books, but rarely reflective of the realities of the system, even at the time of installation. Even in shops with stringent documentation standards there never seems to be time to keep the documentation well maintained. For documentation to be meaningful, it must be easily accessed, heavily used, and easily maintained.

Support for documentation is not readily recognized by user departments, who see other needs as having higher priority. Demanding blind adherence to "shop standards" defeats the purpose of the team effort. Participants must all agree that it is needed. As a start, we require that each applied program change be descriptively documented. This means telling why something is being done as well as what is being done. The information builds over time. There is never a convenient time to stop and document, which is why it never happens.
Available documentation will determine how easily staff can move into the team effort, and striking a balance becomes another part of the negotiation process.

Documentation is enhanced when staff is rotated within the programming areas. For speed of change, programmers tend to own code. They know they will be the next person looking at it and only write down cryptic messages to themselves. I've read lots of theory about egoless programming and team policing, but I have yet to see it work. Instead, we take advantage of peer pressure by rotating support for each major process on a regular basis. Programmers who know that their teammates will be trying to decipher what they have done are much more thorough at describing the process. As a by-product, staff are cross-trained to cover for absences and departures.

User documentation is an entirely different matter. Distributed information is rarely in the right hands and is usually out of date. Interactive processes must be reworked to provide visible instructions and online help facilities. This will assist user departments in dealing with their staff turnover problems also.

A Yale Example

Yale's payroll/personnel system provides a good example of a system that required a very complex "constructive evolution" plan. The promises made in a union contract in 1984 to deliver a new pay cycle and attendance reporting structure in six months provided the motivation for initiating the process. Payment and benefit reporting for over 26,000 employees is handled by the system, which dispenses $300 million annually under four union contracts and four payment cycles. The set of programs that pays faculty, staff, hourly employees, and students was written in 1976, using then "state of the art" technology.
Yale has a decentralized structure, with each of over 400 departments making the pay decisions for their staff, which can include casual employees brought in for a special need, students hired on a work assistantship, and tenured faculty. The business office for each department sends the appropriate pay information to the payroll office for processing, and all checks are created centrally.

Five years ago, volume was burying the central Payroll and Human Resources offices. Central personnel spent hours verifying forms for accuracy and completeness, then batching them to be keyed and processed overnight. Data entry costs were high and follow-up corrections created more pressure in an already tight processing cycle. All information except an inquiry of basic payment data had to be accessed on paper; researching questions could take weeks. There was a two-year maintenance backlog, and user offices were handling manually those tasks for which functionality in the system was unavailable. Programmers were busy just keeping the programs running.

Evaluation of the system led to the decision that a commercial solution would be inappropriate; there were so many specialized processes unique to Yale that available packages would have had to be rewritten, and the estimated cost was prohibitive. There was not sufficient time for a complete system rewrite and, besides, the existing programs were efficient and inexpensive to run. Simply "patching" the programs would only delay the needed work further. New pay documents threatened to stretch the Payroll staff beyond their ability to meet required deadlines even with extensive overtime. A major internal effort was the only feasible solution.

A planning group composed of responsible staff from Payroll, Human Resources, the Comptroller's office, Auditing, and Data Processing was established to define the sequence of modifications and coordinate the accompanying procedural changes.
The plan had to allow us to meet the critical deadline of the union contract without jeopardizing the ongoing processing. As dialogue began, it became apparent there were many unmet needs not even on the known backlog list. The system was divided into logical blocks where changes could be made with minimum impact on other blocks. Priorities were established and work began. The team met every week to figure out ways to handle each new crisis and still keep the system changes moving.

Quick and early changes

Top developers from the MIS staff were assigned to the system, and the 20 percent problem programs were quickly identified and reworked. Contract changes were applied to the restructured programs in a third of the time originally estimated for handling the contract changes with the original code alone. Two more inquiry databases were built using a fourth-generation language, and Human Resources took over writing and maintaining all benefit reporting, much of which is ad hoc. This dropped several more months off the backlog. File expansion needs were held off until after the contract changes were in place by doubling up some fields temporarily. The credibility of the existing system was confirmed when Yale decided to bring back the pension payroll from an outside contractor because the in-house system had much more functionality and better support.

Distributing the input process

Payroll found these new inquiry databases could answer over 80 percent of their problem calls. With the addition of an online entry facility for corrections, most of their work was automated--but it took over a year to convince the clerks that the terminals could do a better job than the pencils! To eliminate the duplication of effort between the distributed departments and the central office, the input process had to be distributed. Creating the programs to receive the input was only the first step.
Providing training and ongoing support threatened to drain all of our developer resources thereafter, but when we found that most of the questions involved policy and procedures rather than processing mechanics, we realized that the user department could take over this job. Over time, this function has grown into a key part of the process.

In planning for the data to flow directly from the distributed departments into the system, control and security issues became the top concerns of the central participants. Because of this, we ended up writing the input systems twice. The first version satisfied the central concerns for security but proved too hard to support. Controls were so tight that less sophisticated computer users in the departments were constantly getting confused and creating problems. The rewrite provided a more flexible input process, using after-the-fact audit trail tracking instead of an up-front lockout process.

Expert systems technology applied

The first programs with distributed input took over the reviewing and approving functions that the central departments had been performing. To automate the remaining functions, a facility was needed that could translate descriptive information into the system codes and generate other information added centrally. Based on preliminary analysis, we determined that available tools could create the process, but because these functions are very volatile, maintenance costs would be very high. The new technology of expert systems was selected because it promised to be both effective and easily maintained. The creation of the rules was a slow process, since correlation tables had to be developed between descriptive data and system codes. We did not want simply to automate the manual process, which was out of date and cumbersome. Each step had to be researched and evaluated to define the rules for editing data.
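The article does not document Yale's actual rules, but the idea of a correlation table that translates descriptive form entries into system codes, and of rules that generate centrally added information, can be sketched in a few lines. All field names, codes, and table contents below are invented for illustration:

```python
# Hypothetical correlation table: descriptive value -> system code.
EMPLOYEE_TYPE_CODES = {
    "casual": "C1",
    "student": "S1",
    "faculty": "F1",
}

def translate(form):
    """Translate a descriptive form into system codes, rejecting unknown data.

    Returns a (record, errors) pair; a nonempty error list means the form
    is caught at entry rather than partway through central processing.
    """
    errors = []
    record = {}
    emp_type = form.get("employee type", "").strip().lower()
    if emp_type in EMPLOYEE_TYPE_CODES:
        record["type_code"] = EMPLOYEE_TYPE_CODES[emp_type]
    else:
        errors.append("unknown employee type: %r" % emp_type)
    # Information formerly added centrally is generated by rule instead:
    if record.get("type_code") == "S1":
        record["review_flag"] = "FIN_AID"  # route student pay for aid review
    return record, errors
```

Because the tables and rules are data rather than program logic, a volatile function like this can be maintained by updating the table, which is the maintainability argument the text makes for the expert-system approach.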
Input screens were tailored to ask only those questions relevant to the process being performed and to the answers given to previous questions. This reduced the potential for error and eliminated the central completeness checking process. Data are electronically edited at the beginning of the process instead of manually along the route. With over 50,000 of these forms processed annually, the savings in clerical time and the elimination of payment delays due to incomplete information are huge. The cost of the paper process is estimated at $10 per form; electronically, it is typically $1.50--an annual savings of more than $425,000 in processing cost alone. The existing update process has been reworked to provide after-the-fact reporting, and output from the expert system feeds directly into the existing processes.

The user department "partners" developed the rules and worked out the political hurdles. Data processing concentrated on learning the new technology. It took over a year to make all the pieces work and tune the process into a functional unit that could be installed. There was much grumbling from the departments using the process, since they could fill out the paper forms faster. We learned that all system changes must deal with parochial interests, and there are still political hurdles to overcome. Certain types of changes still must be done on paper, but the number is shrinking with team effort.

Current Status

The team has been together five years. The system has moved from a completely paper-driven structure to a sophisticated blend of old and new processes. Around 35 percent of all transactions are processed online, and the systems are in use in 93 departments around the University. The original programs are still in action. Standardized interfaces have allowed us to develop a system that spans all levels of technology, from assembler programs to fifth-generation expert systems, in a loosely coupled structure.
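A tailored input screen of the kind described above can be thought of as a small decision tree: each question appears only when the process and the answers so far make it relevant, so a completed form cannot be missing required data. The processes, questions, and branching in this sketch are invented for illustration, not taken from Yale's system:

```python
def build_questions(process, answers):
    """Return the questions still to be asked, given the process and the
    answers collected so far. A screen driven by this function re-runs it
    after every answer, so later questions depend on earlier ones."""
    questions = ["name"]
    if process == "new hire":
        questions.append("employee type")
        # A follow-up question appears only when a prior answer requires it.
        if answers.get("employee type") == "non-U.S. citizen":
            questions.append("visa status")
    elif process == "pay correction":
        questions.append("check number")
    # Never re-ask a question that has already been answered.
    return [q for q in questions if q not in answers]
```

When `build_questions` returns an empty list the form is complete by construction, which is how this style of screen eliminates a separate central completeness check.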
Many of the feeder systems were written and supported by staff in Human Resources who were trained in fourth-generation languages. The planning group still meets each week to manage the present needs and future growth of the enhanced system. The system is now being tuned to accommodate a shrinking clerical staff in the user departments, and more manual processes are being automated. The system is viewed as an information repository because of the control and care taken with the data it contains. Because of the ease with which information can be accessed and maintained by distributed units, it serves as a source of data for other systems through expanded use of the standardized gateways.

Conclusions

With the right mix of skills, a constructive evolution process can be effective in keeping an existing system current with changing user needs and technology. The road is not always smooth, but the approach has been effective at Yale. A partnership with system users provides the commitment to a long-range direction. As the partners develop the plan, they build a base of system and institutional knowledge which allows them to blend current and future requirements and negotiate priorities effectively. Users must be committed to making solutions work instead of functioning as armchair critics of data processing efforts. The teamwork allows those who share the data needs to pool all available funds and develop the most cost-effective solutions to common problems.

Constructive evolution is a management approach, not a technical fantasy. Staffing for evolution requires a perspective different from that of system maintenance. The technicians must be more than coders. They must understand the planned future of the system to determine what should change and what should be left alone. In many cases they are building bridges between old and new technology within a system, requiring knowledge of a wide variety of platforms and products.
Staffing continuity is important to keep the system running while segments of the planned process are being added. As old technology is replaced, the maintenance requirements give way to a need for creative design and integration.

Maintenance does not need to be a career death-trap. When reactive maintenance becomes constructive evolution, it can be an excellent training and experimentation ground for new technology. Exposure to the problems of existing systems will help designers develop new systems that are simpler to maintain, since today's new systems will become the existing systems of tomorrow.

************************************************************************

For further reading:

Martin, J. Software Maintenance: The Problem and Its Solutions. Englewood Cliffs, N.J.: Prentice-Hall, Inc., 1984.

Miller, H. "Creating an Evolutionary Software System: A Case Study." Journal of Systems Management, August 1990, pp. 11-18.

Best, L. "Clearing the Dusty Decks." Computerworld, 26 March 1990, pp. 97-100.

Barnes, G., and B. Decherd. "Don't Chase Problems; Control Them." Computerworld, 25 March 1991, pp. 65-66.
************************************************************************

EVOLUTION OF A SYSTEM

1984
October     Union contract negotiations propose payroll changes

1985
April       Union contract signed mandating payroll changes
June        Constructive evolution plan finalized
July        Rewrite of critical 20 percent (problem programs)
October     Changes mandated by union contract installed
December    New online files for Human Resources

1986
March       Masterfile expansion
July        New payroll timesheets online (security shell)
December    Pensioners paid from Yale payroll

1987
February    IRS filing on tape
April       Casual earnings online (security shell)
November    Operating system conversion

1988
February    Pension tax filing added
July        Reel tape to cartridge conversion; online check stub for payroll
August      Online check history for payroll
September   Automated feed of graduate student financial aid

1989
February    W2 updating online and printing of duplicate forms
June        Dummy Social Security assignment for non-U.S. citizens
August      Assembler code removed in masterfile update process
October     Payroll special payments automated

1990
February    New tax reporting for non-resident aliens
March       Online payroll corrections
April       Bank direct deposit option added
July        Rewrite of new payroll time sheets for flexibility
September   Masterfile expansion required for online profile
October     Online profile installed (expert technology project)
November    Payroll check adjustments online

1991
March       Rewrite of casual payroll for flexibility
September   New state income tax

************************************************************************