|-------------------------------------|
|     Paper presented at CAUSE92      |
|  December 1-4, 1992, Dallas, Texas  |
|-------------------------------------|

Experiences Implementing a Decision Support System (DSS) Within a Management Plan

Gerald P. Weitz
Director, Information Systems
Stanford University School of Medicine
Stanford, California

ABSTRACT

In 1987, Stanford University School of Medicine, facing a period of rapid growth and increased complexity in the business of academic medicine, began implementing a strategic plan for management of the School that called for decentralizing programmatic decisions and responsibilities to the academic departments of the School. The School management realized that enhancing the competency, skills, and motivation of department business managers would be essential to the success of the plan, and, in addition, a sophisticated decision support system would be a necessary tool in the managers' arsenal. After a brief introduction to the plan, the paper discusses three major aspects of the implementation of the decision support system: technical implementation, data quality issues, and integration of the technology into the workplace. Finally, it lays out current and future directions as influenced by the successes and failures of the projects.

-------------------------------------------------------------

In the mid-1980s, Stanford University School of Medicine found itself facing a decade of rapid growth and change. The planned growth in facilities would take the School from 600K sq. ft. to 1,000K sq. ft. New academic programs were planned in molecular and genetic medicine, and the School was revitalizing its clinical programs. The nature of clinical practice was changing due to increased market pressure from health maintenance organizations and changes in government medical insurance. Competition for federal research dollars was increasing rapidly.

Under the pressure of these forces, the management of the School realized it could not manage this magnitude of change centrally, so it embarked on a strategy of decentralizing authority and control to departments within the School. Within the administration, the role of the academic department business manager was key to accomplishing this decentralization, so the School embarked on a program to bolster the skills and abilities of the business managers. This program had four elements:

* Find and retain the best business managers
* Develop their management skills
* Reward their accomplishments
* Provide them the information tools necessary to their success.

This led us in 1987 to invest in the development of a decision support system.

Selecting a System

Working with a committee of business managers, school administrators, and University advisors, we developed a set of criteria for the decision support system:

--An industry standard relational database with SQL
--A GUI workstation with an integrated analytical toolset, including query, spreadsheet, text, statistics, and plot
--Potential for user-developed models
--An easy method for sharing models
--A fit with our strategic architecture (Ethernet, TCP/IP, open, client/server)

We spent nearly a year developing an RFP and evaluating vendors, including bench testing a couple of products. We eventually selected the Metaphor Data Interpretation System[1] (DIS) from Metaphor Computer Systems, Inc. The Metaphor DIS was later licensed by IBM.
Our system now contains the following components:

--A Sybase RDBMS running on a SUN 4/470 with 2GB of disk
--Ethernet with an XNS LAN and a TCP/IP WAN
--A Metaphor/Sybase Gateway running on an IBM RS6000
--Three IBM PS/2 Model 80 fileservers
--20 active IBM PS/2 workstations running Metaphor

Figure 1, below, shows the hardware configuration.

Figure 1. Current Hardware and Network Configuration
FIGURE NOT AVAILABLE IN ASCII TEXT VERSION

Data, Data, Data

Once we had installed the system, our first task was to fill the server with data. We viewed each faculty member as a "center of enterprise." This organizing principle for our database schema would allow us to precisely model the economics of the Medical School so that we could measure the impact of specific programmatic decisions. We began the process of locating, understanding, cleaning up, reorganizing, pre-aggregating, integrating, loading, and verifying data. This was not an easy task. Our data sources included the University IBM mainframe applications, a commercial clinical billing system, and two different hospital systems, one for Stanford University Hospital and one for Lucile Packard Children's Hospital at Stanford.

Data quality issues were abundant and difficult to solve. Our operational systems were not designed with decision support systems in mind. There was a lack of data integration across operational systems. For example, the payroll system required a percentage effort for each pay line, but that information was not passed in a usable way to the accounting system, so it was difficult to associate salary dollars with effort. Our central systems had been developed by vertical lines of business and there was no overarching view of the information as a whole. A large part of our task was to integrate this data outside the applications. We eventually developed a "Metalink" table containing all the identifiers for a faculty member. This included the social security number, employee ID, doctor numbers in the Faculty Practice and the two hospitals, California medical license number, PI number for research, and space identifier. This table is the hub that ties together information from disparate systems.

Figure 2, below, illustrates the difference in the way data is viewed for decision support vs. operations. Each central office is responsible for a vertical slice of the data and tends not to take responsibility outside its domain. Data needs for decision support cut across the operational organizations, as indicated by the X's in the vertical domains.

Figure 2. Decision Support vs. Operational View of Data
FIGURE NOT AVAILABLE IN ASCII TEXT VERSION

Key identifiers were missing from the operational systems; that is, some of the important data did not have faculty identifiers. The University space inventory could attribute space to a department but not to a faculty member. We had to modify the system so the School could maintain space data by faculty. Our financial systems did not attribute funds or expense accounts to faculty. We are approaching this problem by establishing separate attribute tables that allow us to associate values, like faculty identifier, with base tables.
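The following is a minimal sketch of this hub-and-attribute approach, not our production schema: it uses Python with SQLite in place of our Sybase server, the table and column names are invented, and only a few of the identifiers are shown. The point is simply that once a Metalink row and an attribute table exist, data from a source system that carries no faculty identifier of its own can still be summed by faculty member.

    import sqlite3

    conn = sqlite3.connect(":memory:")   # stand-in for the Sybase server
    cur = conn.cursor()

    # Hub table: one row per faculty member, carrying identifiers from each source system.
    cur.execute("""CREATE TABLE metalink (
        faculty_id   TEXT PRIMARY KEY,   -- School-assigned key
        employee_id  TEXT,               -- University payroll
        fpp_doctor   TEXT,               -- Faculty Practice billing
        suh_doctor   TEXT,               -- Stanford University Hospital
        pi_number    TEXT                -- sponsored projects
    )""")

    # Attribute table: associates a faculty identifier with accounts in the
    # financial base tables, which carry no faculty identifier of their own.
    cur.execute("CREATE TABLE account_attr (account_no TEXT, faculty_id TEXT)")

    # Base table downloaded from the University accounting system.
    cur.execute("CREATE TABLE expenditures (account_no TEXT, fiscal_year INTEGER, amount REAL)")

    cur.execute("INSERT INTO metalink VALUES ('F001', 'E1234', 'D77', 'H09', 'PI55')")
    cur.execute("INSERT INTO account_attr VALUES ('1-ABC-100', 'F001')")
    cur.executemany("INSERT INTO expenditures VALUES (?, ?, ?)",
                    [("1-ABC-100", 1992, 42000.0), ("1-ABC-100", 1992, 8000.0)])

    # Expenditures by faculty member: the attribute table supplies the faculty key.
    cur.execute("""
        SELECT m.faculty_id, e.fiscal_year, SUM(e.amount)
        FROM expenditures e
        JOIN account_attr a ON a.account_no = e.account_no
        JOIN metalink     m ON m.faculty_id = a.faculty_id
        GROUP BY m.faculty_id, e.fiscal_year
    """)
    print(cur.fetchall())   # [('F001', 1992, 50000.0)]

In production, the equivalent queries run against the Sybase server through the Metaphor gateway rather than against SQLite.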
There was a lack of clarity and consistency in the definition and usage of data in the operational systems. Seemingly simple questions such as "who are all the faculty?" became devilish as you had to consider all the different types of academic staff who might be acting in a faculty role as well as the purpose for which the question was asked. A department asking that question might include visiting faculty, voluntary clinical faculty, and instructors, while the faculty affairs office might exclude all but tenured faculty. We found numerous instances where data and codes that had originally been intended for one purpose fell into disuse or were used for one purpose in one organization and another purpose in another organization.

Finally, data and data definitions changed over time. This was caused not only by changes in operational systems, but also by changes in organization and changes in usage of data. For example, when we started the project we had 13 clinical departments and six basic science departments. We have since added three new basic science departments, Radiology split into two departments, Surgery split into three departments, and its Thoracic Surgery division combined with Cardiovascular Surgery to become Cardiothoracic Surgery. We now have nine basic science and 16 clinical departments.

All of these problems took a great deal of effort to resolve and continue to require much attention in our periodic downloads of data. Each month we must perform referential integrity checks against all the data we are downloading. Our DBA spends about 40% of his time dealing with routine downloading of data.

Formulating the data requirements was another major problem we encountered in building our database. While we had a good organizing principle for the schema, we were less clear on the specific meaning and usage of the underlying data. We found that high-level planners sometimes would make gross assumptions about the data that would lead to erroneous conclusions. For example, they wanted to measure a faculty member's "hit rate" for sponsored research, that is, the ratio of awards to proposals. On the surface this sounds simple because we do track proposals and awards by faculty. However, if you look at the data it becomes very misleading. Program project grants may have many investigators participating in the project, but only one is the principal investigator and, under our operational system, that person would be credited with the full amount of the award. Proposals may go through several iterations, so that what started as a million-dollar proposal ends up as a proposal for half that amount and is awarded at less than that. Which number is your measure?

We also had difficulty with the scalability of analyses; that is, an analysis that is useful at the School level might break down if carried to the department or faculty level. A good example of this was an analysis of the ratio of research expenditures to research space. At the School level, this would yield useful information for facilities planning. At the department level, it would begin to break down because cross-department projects would attribute space and research expenses to different departments. At the faculty member level, the information was almost useless because of the lack of integrity of the faculty-level data.

We even had difficulty formulating useful questions. In the example above of measuring faculty hit rates for sponsored research, we finally decided that even if we could formulate a meaningful answer to the question, the question itself was not useful. It was neither a good measure of productivity nor a good predictor of future award volume. We dropped the question and continue to struggle to find meaningful predictors and measures in this area. As the planners gained experience with the data and analyses, this became less of a problem.

Departmental business analysts, on the other hand, understood the operational data well, but they wanted the decision support system to deal with operational issues rather than planning and analysis. This kept driving us to capture more and more detail information rather than putting our emphasis on higher-level aggregations to support planning.
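As a concrete, if hypothetical, illustration of this difference in grain: planning questions rarely need the individual transactions that departments wanted to see; they need totals by faculty member, fund type, and fiscal year. The sketch below (plain Python, invented records and names, not our Metaphor toolset) shows the kind of pre-aggregation we were emphasizing when we prepared detail data for planning use.

    from collections import defaultdict

    # Hypothetical transaction detail as it arrives from an operational download:
    # (faculty_id, fiscal_year, fund_type, amount)
    transactions = [
        ("F001", 1991, "sponsored", 12000.0),
        ("F001", 1991, "sponsored", 30000.0),
        ("F001", 1991, "clinical",   9000.0),
        ("F002", 1991, "sponsored",  5000.0),
        ("F002", 1992, "clinical",  15000.0),
    ]

    # Planning-level aggregate: total dollars per faculty member, fund type, and year.
    summary = defaultdict(float)
    for faculty_id, year, fund_type, amount in transactions:
        summary[(faculty_id, year, fund_type)] += amount

    for (faculty_id, year, fund_type), total in sorted(summary.items()):
        print(f"{faculty_id}  FY{year}  {fund_type:10s} {total:>12,.2f}")

Departments, by contrast, wanted the individual transactions preserved so they could reconcile their accounts, which is why the two needs kept pulling the database design in different directions.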
Integration of Decision Support into the Workplace

Our approach to implementation was to establish a project team led by the Planning Office, with membership from the Information Systems Group and four academic departments. Other offices, such as Finance or the Faculty Practice Program (FPP), would be called on as needed. We were to build the system on a subject-area basis, starting with facilities and moving through FPP, finance, faculty appointments, and so on. For each area, we would specify, design, and implement the appropriate relational tables and associated applications. The planners and departmental analysts would use the applications for generic reporting and would generate ad hoc analyses to solve particular problems, and, based on usage, we would iterate on the database schema. Our rationale for this approach was that we needed the experiences of both School and department analysts to be able to understand the proper schema for the database and the appropriate applications. We felt that just as the School had a different view of data than the central University, the departments would have a different view than the School. What we hoped for was a simultaneous top-down and bottom-up development.

The first problem we ran into was that the departments had a much different view of the data than the School. Although we anticipated this, we did not fully understand the magnitude of the difference. The project was christened the "Management, Analysis and Planning System" (MAPS), and we felt our charter was to broadly support the planning functions in the School and departments. But the departments focused on operational analysis; they were so busy fighting day-to-day battles they had no time for planning. They wanted information at the detail transaction level while we were interested in building aggregations for higher-level planning and analysis. If the system could not help with the daily problems that they faced, they were not interested in participating in the project.

This exacerbated our second problem: learning and retaining the software tools and databases. While most of the software tools were reasonably familiar, it required extensive training to learn how to use the tools as an integrated package so that you could build models, and one needed to use the tools frequently to retain that knowledge. Additionally, the data was extremely complex and, even though we had a visual SQL query tool, it required a high level of expertise to successfully create a query. The departments never got to the point where they could create their own queries, much less build models. In fact, only a few users in the central School offices developed any expertise; most of the models were constructed by the analysts in the Planning Office.

We were also plagued by turnover. In the four years of the project, we had four different project managers and extensive periods with no project manager. Every user that we trained either left the School or changed departments. Eventually, we pulled back and developed the system from a School perspective, leaving the department view for later.
We were successful in building many applications for the School and in performing several important ad hoc analyses. We currently use the system for our quarterly financial report, our annual "fact book," a funds analysis report, a faculty salary tracking report, chairman search packages, and several other important applications. We have performed many ad hoc analyses, including the impact of an early retirement plan for the faculty, FTE levels in the School, graduate student stipends, and many others.

Current and Future Directions

While we have not yet cracked the problem of supporting the departments, we have learned from our experiences, and this has helped shape our plans for the future. We specifically learned the following:

--Planning is an infrequent activity; for departments to be proficient, planning tools and data must be integrated into their daily operations.
--Having tools and data is not sufficient to get departments to do planning; they must have the time and impetus to plan.
--The pump must be primed with fully implemented planning applications so that planning can be started without the need for every department to engage in the intellectual exercise of developing a planning methodology.

Armed with these insights, we have started several projects that we hope will help us reach our goal of having each department able to develop and follow comprehensive plans for the future.

The first project is to continue to move DSS out to the departments, but at a slower pace and with individual mentoring of each department analyst. As we work with the analyst we will focus on current problems in the department. If we require additional data, we will develop means of capturing that data through local entry in the department or extraction from central systems.

The second project is to develop a Grants Management system. This system will use the data that has already been downloaded from the University systems, including purchase requisitions, expenditures, sponsored projects, and personnel, along with local data about commitments. The local data will be entered by the department using a Windows or Macintosh client application and stored on our Sybase database server. Reporting will be done through Metaphor, giving the departments a flexible set of standard reports and the ability to create their own reports using the DSS tools. This project will address two of the problems we found. First, it should free up time for the business managers, as they are currently spending considerable effort manually managing their grants. Second, it should get them familiar with using Metaphor on a daily basis.

Our third project is to develop a Department Resource Planning Model in Metaphor. This model is based on a series of Faculty Profiles, one for each faculty member in each department. The Faculty Profile is a multi-page spreadsheet containing three years of history and five years of projection of a faculty member's activities. The historical data is from our database and covers compensation, operating budget support, FPP income and activities, sponsored research income and activities, space usage, and other funding. The business managers manipulate a set of decision variables to project the future financial and other resource usage for the faculty member. The spreadsheets are loaded into a database and summed over a department to give a projection of the total activity for that department.
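The roll-up from Faculty Profiles to a department projection is, at its core, a simple calculation. The following sketch is hypothetical and greatly simplified (the real profiles are Metaphor spreadsheets with many more history and projection categories), but it shows the shape of the model: each faculty member's recent history plus a few decision variables yields a multi-year projection, and the department projection is the sum of its faculty projections.

    from dataclasses import dataclass

    @dataclass
    class FacultyProfile:
        """Much-simplified stand-in for one Faculty Profile spreadsheet."""
        name: str
        base_compensation: float   # most recent year of compensation history
        sponsored_support: float   # most recent year of sponsored research support
        comp_growth: float         # decision variable: projected annual salary growth
        award_growth: float        # decision variable: projected growth in awards

        def project(self, years: int = 5) -> list:
            """Project compensation and sponsored support for each future year."""
            rows = []
            comp, awards = self.base_compensation, self.sponsored_support
            for year in range(1, years + 1):
                comp *= 1 + self.comp_growth
                awards *= 1 + self.award_growth
                rows.append({"year": year, "compensation": comp, "sponsored": awards})
            return rows

    def department_projection(profiles: list, years: int = 5) -> list:
        """Sum the individual faculty projections to get a department-level projection."""
        totals = [{"year": y, "compensation": 0.0, "sponsored": 0.0} for y in range(1, years + 1)]
        for profile in profiles:
            for i, row in enumerate(profile.project(years)):
                totals[i]["compensation"] += row["compensation"]
                totals[i]["sponsored"] += row["sponsored"]
        return totals

    dept = [
        FacultyProfile("Faculty A", 120000, 250000, comp_growth=0.04, award_growth=0.06),
        FacultyProfile("Faculty B",  95000,  80000, comp_growth=0.04, award_growth=0.02),
    ]
    for row in department_projection(dept):
        print(row)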
The Resource Planning Model will not only help the business managers to plan; it will also teach them about the relationships among the various components of a faculty member's activities.

Conclusions

There are three important conclusions that we have drawn from our experiences in implementing a DSS:

--Decision support systems require that data and processes be viewed from a corporate rather than a line-organization perspective. This perspective will require integration, consistency, and accuracy of data across all lines of business.
--The cultural changes required to implement a DSS are significant and take time to overcome. It takes years of exposure to a technology like DSS for it to work its way into the culture of an organization.
--A strong business need for decision support and a strategic business plan are essential to the success of implementing a DSS. Without these, you won't know where you are going and will not have the means to steer.

NOTE

[1] See Patricia B. Seybold, "Metaphor Computer Systems: A Quiet Revolution," Patricia Seybold's Office Computing Report, August 1988, for a description of the Metaphor Computer System.