This paper was presented at CUMREC '99, The College and University Information Services Conference. It is the intellectual property of the author(s). Permission to print out copies of this paper is granted provided that the copies are not made or distributed for commercial advantage and that the title and authors of the paper appear on the copies. To copy or disseminate otherwise, or to republish in any form, print or electronic, requires written permission from the authors.

Abstract
Storing Degree Audit Data

In early versions of the degree audit system at the University of Texas at Austin (1985-1993), audits existed exclusively in a text report format, summarizing the progress of individual students toward their degrees. Eventually, due to an occasional need to reprint these reports, an attempt was made to store audits, still in a purely text format. As demands on the system increased, among them requests for online student access to audits, it was found preferable to store audit information as formatted data on a mainframe database. This data storage approach, though giving rise to an increase in space and programming requirements, had the advantage of separating the evaluation and presentation functions of the degree audit system, thereby facilitating display of the data in a great variety of formats and on a number of different platforms, including the World Wide Web.


Storing Degree Audit Data

Mark Long, Senior Systems Analyst
Brent Heustess, Systems Analyst

Office of the Registrar
The University of Texas at Austin
Austin, Texas

Introduction

One function of student records information technology (Data Coordination) at the University of Texas at Austin is to provide students and advisers with information about degree audits. A degree audit is a formal evaluation of a student's coursework to date, in terms of the requirements for a particular degree. Such an evaluation is of great importance as a final check of requirements when students approach completion of a degree. Where resources (including computer resources) permit, however, it is also possible and desirable to produce audits at frequent intervals, as a part of the counseling of students by advisers and as a part of students' self-advising as well. Degree audits allow both students and advisers to determine which degree requirements have been completed and which must still be satisfied, thus facilitating informed decisions about the choice of future courses.

The present paper deals with three different strategies for presenting such information. These are:

- dynamic generation of reports or displays, as audit results are being determined (the no-storage option);

- storage of degree audit text, for later retrieval and display (the text storage option);

- storage of encoded degree audit data, for later retrieval, formatting, and display (the data storage option).

We will begin with a brief history of automated degree audits at the University of Texas at Austin, describing how each of these strategies has been used at different times. We will then look more closely at our present strategy, the data storage option, and detail the file structures involved. We will demonstrate the ways in which these structures are exploited for print, mainframe, and web presentation of degree audits using IDA (Interactive Degree Audit), a delivery vehicle for degree audit data. In conclusion, we will offer a cost/benefit comparison of the three strategies and show our reasons for preferring the data storage option.

I. Short history of degree audits at UT Austin

UT Austin has had an automated degree audit system since 1985. This system currently runs on an Amdahl Millennium 785 mainframe running the OS/390 (revision 2.5) operating system. The degree audit system is written in Software AG's NATURAL programming language and uses the same vendor's ADABAS database.

The automated degree audit system established at UT Austin is a linear system. Degree audits make detailed comparisons between coded degree requirements (rules) and individual students' coursework. Rules are processed one after another in a set order, determined by user-assigned rule numbers (001-999). As each rule is processed, the results for that particular rule (satisfied vs. unsatisfied, hours counted vs. hours lacking, specific courses counted or set aside, etc.) are determined. A sample rule assessment is shown in Figure 1.

Figure 1

Figure 1. Sample detail rule display
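The production evaluation code is written in NATURAL, but the linear, rule-by-rule processing described above can be sketched in outline. The Python fragment below is an illustrative simplification, not the actual system: all field names are hypothetical, and only a single "count hours from a list" style of rule is modeled, whereas the real system supports 27 rule types.

```python
# Hypothetical sketch of the linear degree audit evaluation described above.
# Rules are processed one after another in ascending rule-number order; each
# assessment records whether the rule was satisfied and which hours counted.

def evaluate_audit(rules, coursework):
    """Process rules in strict rule-number order (001-999).

    rules: list of dicts with hypothetical keys "number", "hours_required",
           and "eligible_courses".
    coursework: dict mapping course identifiers to credit hours.
    """
    results = []
    remaining = dict(coursework)  # hours still available, keyed by course
    for rule in sorted(rules, key=lambda r: r["number"]):
        needed = rule["hours_required"]
        counted = []
        for course, hours in list(remaining.items()):
            if needed <= 0:
                break
            if course in rule["eligible_courses"]:
                used = min(hours, needed)
                counted.append((course, used))
                needed -= used
                remaining[course] -= used
                if remaining[course] == 0:
                    del remaining[course]  # fully consumed by this rule
        results.append({
            "number": rule["number"],
            "satisfied": needed <= 0,
            "hours_lacking": max(needed, 0),
            "courses_counted": counted,
        })
    return results
```

Note how each course's hours, once counted toward a rule, are set aside so that later rules cannot reuse them; this mirrors the "courses counted or set aside" behavior of the real assessments.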

During the early, experimental phase of the degree audit system (1985-1989), output consisted of little more than a string of such assessments on a printed report, supplemented by a list of each student's coursework and some explanatory material in the header of the print form. These audit reports were not stored electronically in any manner. Only the input information for each audit (courses, rules, and assorted tables) existed in the form of encoded data. If a fresh copy of an individual audit was needed, audit results had to be completely recalculated and reprinted. No online access (for students or for advisers) was contemplated. This was, in a word, the no-storage option.

At this stage, because of the resource-intensive nature of the processing involved, degree audits could only be run overnight in batch mode. In the event of a printer malfunction or misrouting of output, creation and delivery of audit reports could be delayed for 24 hours or more. This problem was alleviated, to some extent, by allowing a limited number of individual audits to be run in small jobs during the day. Nonetheless there was no immediate remedy for loss of output when very large groups of students, such as entire academic departments, were involved.

Because of such considerations, a decision was made in 1989 to store degree audits, line by line, in sequential datasets, with one dataset for each batch job. Labels for a given line indicated which audit it belonged to and its relative position within that audit. This strategy made it possible to reprint even a very large set of audits quickly and with very modest demands on computer resources. Print logic was removed from rule evaluation modules, where it had resided previously, and was placed in a single print program which read through a particular, sorted text dataset and arranged the individual lines for printing. This version of the text storage option remained in place from 1989 until 1993.

Demands on the degree audit system continued to grow during the early 1990's. In particular, there was a desire to view and, if possible, create degree audits online. The highly resource-intensive nature of audit processing, which had limited audit creation to a batch environment from the very beginning, continued to be a factor. However, although real-time creation of audits remained an elusive goal, online display of audits was finally made available to advisers in 1993. This was brought about by a shift to the data storage strategy. By breaking down each audit into logical components and storing these on a database, we were able to assemble any requested audit programmatically for presentation in an online mainframe system. Since this strategy, still in use today, is somewhat involved, we will reserve detailed discussion for section II, below.

In 1995, the Office of the Registrar was given a mandate by the university's provost to provide students with direct access to their degree audit information, as well as the ability to request additional degree audits. The data storage machinery already in place was exploited for this purpose, and a mainframe (TN3270) version of the new student access system went into service in 1996, under the name of IDA (Interactive Degree Audit). More recently, access to IDA has been made available over the World Wide Web.

II. Database structure used at UT Austin to store degree audits

The file used for storing degree audits at UT Austin is a very large and complex one. The extensive amount of data for each audit makes it necessary to spread that data out over a number of physical records on the database. To simplify the present discussion, however, we will use the term "degree audit record" to refer to the set of all component records for a given degree audit. In addition, we will refer only to the most significant data fields on each such "logical" record. (A full file listing is attached as an appendix.)

The basic unique key that allows the computer to store and later retrieve a degree audit is formed from four fields: STUDENT-ID, CATALOG, DEGREE-PLAN, and AUDIT-TYPE. A student may have one and only one audit that matches this key. For example, a student might have an adviser-created English degree audit under the 1996-1998 catalog, a second adviser-created audit under the 1998-2000 catalog, and a student-initiated English audit under the 1996-1998 catalog. A new audit with the same key overwrites the existing audit. This keeps the most current data on file and limits the size of the file.

Unique Key

STUDENT-ID ID number of student.

CATALOG Catalog under which this set of degree requirements originated. Undergraduate catalogs at UT Austin span 2 years.

DEGREE-PLAN Code that describes the college, department, major, and track of the degree plan of the audit (ex. L-A-SPN-BA is the code for a B.A. in Spanish from the College of Liberal Arts).

AUDIT-TYPE Describes whether the audit was created by an adviser or was student initiated.
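As an illustration of the key structure described above, the following Python sketch reproduces the "one audit per key, newest wins" behavior with an in-memory dictionary. The production system stores audits on an ADABAS file; the function and variable names here are hypothetical.

```python
# Hypothetical sketch of the four-field unique key and its overwrite behavior.
# A new audit stored under an existing key replaces the old one, so only the
# most current audit for each (student, catalog, plan, type) is kept on file.

audit_store = {}

def store_audit(student_id, catalog, degree_plan, audit_type, audit_data):
    """Store an audit under its four-field unique key, overwriting any prior one."""
    key = (student_id, catalog, degree_plan, audit_type)
    audit_store[key] = audit_data
    return key
```

Storing a second audit for the same student, catalog, degree plan, and audit type simply replaces the first; changing any one of the four fields (for example, the catalog) creates a separate entry.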

A single degree plan may consist of up to 100 individual degree requirements. The results of processing these requirements are stored in an array on the degree audit record. The information stored for each requirement includes: RULE-NUMBER, RULE-TYPE, RULE-SATISFIED, RULE-TEXT, and COURSE-TABLE.

Array of requirements

RULE-NUMBER Number of requirement (001-999). Used to process requirements in strict order. (Although originally intended only as housekeeping data, RULE-NUMBER has proven a useful tool for discussing degree audits. Students and advisers now see rule numbers in both printed and online displays of audits.)

RULE-TYPE Arbitrary codes that determine how a specific requirement is evaluated and displayed. Some rule types select courses from a simple list (ex. 3 hours from either Chemistry, Biology, or Physics) or from a complex list (PHY 314 OR (PHY 112 & PHY 312)). Others evaluate a student's GPA, compute total credit hours, compute total hours taken in residence, and so on. There are 27 rule types currently in use.

RULE-SATISFIED Switch that shows whether or not a requirement has been satisfied. Can also indicate that the requirement was waived by the college for this student.

RULE-TEXT Text description of the requirement, plus any comments that the degree audit system inserted based on the student's situation.

COURSE-TABLE List of courses used to satisfy a specific requirement. This is an array of department abbreviation & course number, semester course was taken, unique number of course section, type of course (residence, transfer, credit by exam,...), and hours of this course used for the requirement. (For economy of storage, we actually keep for each rule only a set of pointers to specific courses in the master list of coursework considered for the audit as a whole.)
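The "logical" record described above, including the pointer scheme used by COURSE-TABLE, might be modeled as follows. This Python sketch is illustrative only: the class and field names are hypothetical, and the real file carries many more fields (see the appendix).

```python
# Hypothetical model of the logical degree audit record. As in the real file,
# each requirement stores pointers (here, list indexes) into the audit's
# master coursework list rather than repeating full course data per rule.
from dataclasses import dataclass, field

@dataclass
class Course:
    department: str   # department abbreviation, e.g. "SPN"
    number: str       # course number
    semester: str     # semester the course was taken
    course_type: str  # residence, transfer, credit by exam, ...

@dataclass
class Requirement:
    rule_number: int  # 001-999; rules are processed in strict order
    rule_type: str    # one of the arbitrary evaluation/display codes
    satisfied: bool   # RULE-SATISFIED switch
    text: str         # RULE-TEXT description plus inserted comments
    course_pointers: list = field(default_factory=list)  # (index, hours used)

@dataclass
class DegreeAudit:
    student_id: str
    catalog: str
    degree_plan: str
    audit_type: str
    coursework: list = field(default_factory=list)    # master course list
    requirements: list = field(default_factory=list)  # up to 100 entries

    def courses_for(self, req):
        """Resolve a requirement's pointers back to full course records."""
        return [(self.coursework[i], hours) for i, hours in req.course_pointers]
```

The pointer resolution in `courses_for` shows why the scheme is economical: a course that satisfies several requirements is stored once in the master list and merely referenced by each rule.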

III. IDA, the Interactive Degree Audit system

IDA, the Interactive Degree Audit, is the online delivery vehicle for degree audit information. As noted, this system exists in both a TN3270 version and a web version. Students have made over 107,000 inquiries to IDA since 1996; it is a popular system.

The data storage option imposes a separation of evaluation logic from the presentation logic, and this has proved advantageous for delivery of degree audit data, especially in the web version of IDA. Web display logic is very different from the logic used to print a paper degree audit. To cite just one example, page size is not an issue on the web, as it is for paper printing or even for a TN3270 display. A web page simply accommodates the data. This means, for instance, that arcane codes can be translated into their actual textual values, making the degree audit more readable.

The data storage option also allows us to present audit information in a variety of ways. In IDA, as in other presentation formats, students can view a complete list of courses taken and a detailed listing of the requirements evaluated, with full narrative text and a list of courses counted for each requirement. In addition, IDA users have a number of more narrowly defined options, including:

- a summary of audit results

- a list of those requirements that were not satisfied

- a list of courses that will not count toward the degree

- a list of courses counted as elective hours only.

The summary display (Figure 2) lists the basic text of each requirement and whether that requirement has or has not been met. Using another powerful feature, hyperlinks, a student can click on a rule number and be taken to a display of the full detail for that single requirement. The rule numbers used to create the degree audit are the means by which these hyperlinks are implemented. Rule numbers appear throughout IDA, and each is an active link that takes the user to the full detail display for that individual rule.

Figure 2. Sample summary display from IDA
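A minimal sketch of how such a summary display, with rule-number hyperlinks, could be generated from the stored audit data is shown below. The URL pattern, function name, and dictionary keys are hypothetical, not IDA's actual ones.

```python
# Hypothetical sketch of the IDA summary display: one HTML table row per
# requirement, with the rule number rendered as a hyperlink to the
# full-detail display for that single rule.

def summary_rows(requirements):
    """Build HTML table rows from stored requirement results."""
    rows = []
    for req in requirements:
        status = "Satisfied" if req["satisfied"] else "Not satisfied"
        # Rule numbers are zero-padded to three digits (001-999) and serve
        # as the link target for the single-rule detail display.
        link = '<a href="detail?rule={n:03d}">{n:03d}</a>'.format(n=req["number"])
        rows.append("<tr><td>{}</td><td>{}</td><td>{}</td></tr>".format(
            link, req["text"], status))
    return rows
```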

The list of remaining requirements is a subset of the summary display, showing only those requirements that have not been satisfied. It gives students a quick way to see what is left to complete the degree.

A student can also choose to view a display of courses that were not counted toward this degree audit. A course might not have been counted because the student failed the course or because it duplicated another course the student had taken. The display is presented in the format of the full detail but lists only those courses that were not counted (Figure 3).

Figure 3. Sample NOT display from IDA

Finally, a student can view a list of courses that counted only as elective hours and not toward specific degree requirements (Figure 4). If a student feels a course should have counted toward a specific requirement, the student can petition the dean to apply the course toward the requirement.

Figure 4. Sample elective course display from IDA

A public demo version of IDA will soon be available via the web page for the Office of the Registrar at the University of Texas at Austin:

<http://www.utexas.edu/student/registrar/ida/index.html>.

This is not an interactive demo, just a series of static HTML pages that mimic the full IDA system. Choices are predetermined in this demo. Nonetheless, the full range of IDA's options is illustrated.

IV. Costs vs. benefits of storing degree audit data

The no-storage option is the simplest strategy, but also the most limiting. It requires no special program logic for retrieving display data and only minimal code for formatting. However, since only dynamic, real-time display is possible, there is little flexibility in the mode of presentation, and any second look at the audit requires that it be rebuilt. The intertwining of evaluation logic and display logic, virtually unavoidable with this approach, makes both types of code extremely difficult to modify. No doubt the no-storage option could be made to work in a relatively closed environment -- for example, on a stand-alone PC -- but, in the context of a multi-user mainframe system, it is inadequate.

The text storage option preserves most of the simplicity of the first strategy while eliminating some of its disadvantages. In particular, this approach could allow the audit text to be reprinted or redisplayed at virtually any time, without the cost of reevaluating the audit requirements. On other levels, though, flexibility would still be lacking. Display format would be inherent in the specific way the text was stored and would be very difficult to modify dynamically -- for example, in alternating between printed reports and online displays. Finally, text is by its very nature a non-compact medium, and the storage space required is fairly great.

The data storage option is the solution which has been adopted at UT Austin. There are a number of reasons for this:

1. This approach shares with the text storage option the advantage of making audits available for reprinting/redisplay without repeating the audit process.

2. Use of encoded data, rather than text, permits a considerable reduction in the space required for storing the audits.

3. Encoded data is very easy to analyze and can serve as the basis for numerous supplementary reports. For example, it is possible to spot trends, such as the frequency with which particular courses are used to satisfy degree requirements or the number of cases in which a specific rule has been waived or altered.

4. Most important, the separation of evaluation logic and presentation logic results in program code that is both easier to maintain and more versatile with regard to the types of display it allows. The degree audit results can be presented in a variety of different ways, on a number of different platforms. For example, an audit can be both printed on a paper form and displayed in HTML table form for the web. (The two displays contain the same basic information, but the programs that create them are quite different.) In IDA, a student can view the details of the entire audit or just a brief summary of the requirements. As shown above, data from individual requirements, such as those which prevent coursework from counting toward the degree, can be extracted and displayed in isolation.
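The point of item 4 can be illustrated concisely: because evaluation results are stored as data, a print renderer and a web renderer can be written independently against the same record, with no change to the evaluation code that produced it. The Python sketch below is hypothetical and not the actual NATURAL code.

```python
# Hypothetical sketch of the evaluation/presentation split: the same stored
# requirement result is rendered two ways by two independent programs.

def render_text(req):
    """Fixed-width line for a printed report."""
    mark = "SATISFIED" if req["satisfied"] else "NOT SATISFIED"
    return "Rule {:03d}  {:<40} {}".format(req["number"], req["text"], mark)

def render_html(req):
    """HTML table row for the web display."""
    mark = "Satisfied" if req["satisfied"] else "Not satisfied"
    return "<tr><td>{:03d}</td><td>{}</td><td>{}</td></tr>".format(
        req["number"], req["text"], mark)
```

Adding a third presentation (a TN3270 screen, say) would mean adding a third renderer, while the stored data and the evaluation modules remain untouched.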

It has been our experience that these benefits, taken together, more than outweigh the costs imposed by the data storage option, namely, the added overhead of maintaining an ADABAS file (with the storage space that this requires) and the need for relatively complex and sophisticated programs to retrieve the data and assemble it for display.

Appendix